
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : You are implementing AWS Direct Connect. You intend to use AWS public service endpoints, such as Amazon S3, across the AWS Direct Connect link. You want other Internet
traffic to use your existing link to an Internet Service Provider. What is the correct way to configure AWS Direct Connect for access to services such as Amazon S3?
1. Configure a public Interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Advertise a default
route to AWS using BGP.
2. Create a private interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Configure specific
routes to your network in your VPC.
3. Create a public interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Advertise specific routes for your network
to AWS.
4. Create a private interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure and advertise a default route to AWS.


Correct Answer : 3

Explanation: Private virtual interface: A virtual interface is the VLAN that transports AWS Direct Connect traffic. A private virtual interface supports sending traffic
to a single virtual private cloud (VPC). A public virtual interface supports sending traffic to public AWS services such as Amazon Simple Storage Service (Amazon S3).

Hence options 2 and 4 are ruled out.
To connect to public AWS services such as Amazon EC2 and Amazon S3, you need to provide the following:
- A public ASN that you own (preferred) or a private ASN.
- Public IP addresses (/31) (that is, one for each end of the BGP session) for each BGP session. If you do not have public IP addresses to assign to this connection, log on to AWS and then open a ticket with AWS Support.
- The public routes that you will advertise over BGP.
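The /31 requirement above can be illustrated with Python's standard ipaddress module: a /31 point-to-point block (RFC 3021) contains exactly two addresses, one for each end of the BGP session. The addresses below are hypothetical examples from a documentation range, not values from the original text:

```python
import ipaddress

# Hypothetical /31 peering block from the TEST-NET-3 documentation range;
# your actual addresses would come from public IP space you own or that
# AWS Support assigns to the connection.
peering = ipaddress.ip_network("203.0.113.0/31")

# A /31 point-to-point network has exactly two addresses:
# one for each end of the BGP session.
endpoints = list(peering)
print(len(endpoints))          # 2
your_peer_ip = endpoints[0]    # your router's end of the BGP session
amazon_peer_ip = endpoints[1]  # the AWS end of the BGP session
print(your_peer_ip, amazon_peer_ip)
```

This is why one /31 per BGP session is sufficient: there are no network or broadcast addresses to waste on a point-to-point link.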
In the Define Your New Public Virtual Interface dialog box, do the following:
a. In the Connection field, select an existing physical connection on which to create the virtual interface.
b. In the Interface Name field, enter a name for the virtual interface.
c. In Interface Owner, select the My AWS Account option if the virtual interface is for your AWS account ID.
d. In the VLAN # field, enter the ID number for your virtual local area network (VLAN); for example, a number between 1 and 4094.
e. In the Your router peer IP field, enter the IPv4 CIDR destination address where traffic should be sent.
f. In the Amazon router peer IP field, enter the IPv4 CIDR address you will use to send traffic to Amazon Web Services.
g. In the BGP ASN field, enter the Border Gateway Protocol (BGP) Autonomous System Number (ASN) of your gateway; for example, a number between 1 and 65534.
h. Select the Auto-generate BGP key check box to have AWS generate one. To provide your own BGP key, clear the Auto-generate BGP key check box, and then, in the BGP Authorization Key
field, enter your BGP MD5 key.
i. In the Prefixes you want to advertise field, enter the IPv4 CIDR destination addresses (separated by commas) where traffic should be routed to you over the virtual interface.



Q. Can I use the same private network connection with Amazon Virtual Private Cloud (VPC) and other AWS services simultaneously?
Yes. Each AWS Direct Connect connection can be configured with one or more virtual interfaces. Virtual interfaces may be configured to access AWS services such as Amazon EC2 and
Amazon S3 using public IP space, or resources in a VPC using private IP space.




Question : An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier, and that will utilize Amazon
DynamoDB for storage. When creating the CloudFormation template, which of the following would allow the application instances access to the DynamoDB tables without exposing API
credentials?
1. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and associate the Role to the
application instances by referencing an instance profile.
2. Use the Parameters section in the CloudFormation template to have the user input Access and Secret Keys from an already created IAM user that has the permissions
required to read and write from the required DynamoDB table.
3. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the
instance profile property of the application instance.
4. Create an Identity and Access Management user in the CloudFormation template that has permissions to read and write from the required DynamoDB table, use the GetAtt
function to retrieve the Access and Secret Keys, and pass them to the application instance through user data.

Correct Answer : 1
Explanation: Manage Credentials for Applications Running on Amazon EC2 Instances: If you have an application that runs on an Amazon EC2 instance and needs to make
requests to AWS resources such as Amazon S3 buckets or a DynamoDB table, the application requires AWS security credentials. However, distributing and embedding long-term security
credentials in every instance that you launch is a challenge and a potential security risk. Instead of using long-term credentials, like IAM user credentials, we recommend that you
create an IAM role that is associated with an Amazon EC2 instance when the instance is launched. An application can then get temporary security credentials from the Amazon EC2
instance. You don't have to embed long-term credentials on the instance. Also, to make managing credentials easier, you can specify just a single role for multiple Amazon EC2
instances; you don't have to create unique credentials for each instance. For a template snippet that shows how to launch an instance with a role, see IAM Role Template Examples.
Note : Applications on instances that use temporary security credentials can call any AWS CloudFormation actions. However, because AWS CloudFormation interacts with many other AWS
services, you must verify that all the services that you want to use support temporary security credentials. For more information, see AWS Services that Support AWS STS.
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "myEC2Instance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        .......
        "IamInstanceProfile": { "Ref": "RootInstanceProfile" }
      }
    },
    "RootRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          ......
        }
      }
    },
    "RolePolicies": {
      "Type": "AWS::IAM::Policy",
      "Properties": {
        .......
        "Roles": [ { "Ref": "RootRole" } ]
      }
    },
    "RootInstanceProfile": {
      "Type": "AWS::IAM::InstanceProfile",
      "Properties": {
        "Path": "/",
        "Roles": [ { "Ref": "RootRole" } ]
      }
    }
  }
}
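Filling in the elided sections, a complete minimal template might look like the sketch below. The AMI ID, instance type, policy name, DynamoDB actions, and table ARN are hypothetical placeholders, not values from the original snippet:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "RootRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Principal": { "Service": ["ec2.amazonaws.com"] },
            "Action": ["sts:AssumeRole"]
          }]
        }
      }
    },
    "RolePolicies": {
      "Type": "AWS::IAM::Policy",
      "Properties": {
        "PolicyName": "DynamoDBAccess",
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query", "dynamodb:UpdateItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyAppTable"
          }]
        },
        "Roles": [{ "Ref": "RootRole" }]
      }
    },
    "RootInstanceProfile": {
      "Type": "AWS::IAM::InstanceProfile",
      "Properties": {
        "Path": "/",
        "Roles": [{ "Ref": "RootRole" }]
      }
    },
    "myEC2Instance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-12345678",
        "InstanceType": "t2.micro",
        "IamInstanceProfile": { "Ref": "RootInstanceProfile" }
      }
    }
  }
}
```

The key point is that no access keys appear anywhere in the template: the instance obtains temporary credentials for RootRole automatically via the instance metadata service.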






Question : Your company has an on-premises multi-tier PHP web application, which recently experienced downtime due to a large burst in web traffic following a company announcement.
Over the coming days, you are expecting similar announcements to drive similar unpredictable bursts, and are looking for ways to quickly improve your infrastructure's
ability to handle unexpected increases in traffic. The application currently consists of 2 tiers: a web tier, which consists of a load balancer and several Linux Apache web servers, as
well as a database tier, which hosts a Linux server running a MySQL database. Which scenario below will provide full site functionality, while helping to improve the ability
of your application to handle traffic in the short timeframe required?
1. Offload traffic from on-premises environment. Setup a CloudFront distribution and configure CloudFront to cache objects from a custom origin. Choose to customize your
object cache behavior, and select a TTL that objects should exist in cache.
2. Migrate to AWS. Use VM Import/Export to quickly convert an on-premises web server to an AMI. Create an Auto Scaling group, which uses the imported AMI to scale the web
tier based on incoming traffic. Create an RDS read replica and set up replication between the RDS instance and the on-premises MySQL server to migrate the database.
3. Failover environment. Create an S3 bucket and configure it for static website hosting. Migrate your DNS to Amazon Route 53 and configure DNS
failover to the S3-hosted website.
4. Hybrid environment. Create an AMI which can be used to launch web servers in EC2. Create an Auto Scaling group which uses the AMI to scale the web tier based on
incoming traffic. Leverage Elastic Load Balancing to balance traffic between on-premises web servers and those hosted in AWS.


Correct Answer : 2

Explanation: Migrate Your Existing Applications and Workloads to Amazon EC2
Migrate your existing VM-based applications and workloads to Amazon EC2. Using VM Import, you can preserve the software and settings that you have configured in your existing VMs,
while benefiting from running your applications and workloads in Amazon EC2. Once your applications and workloads have been imported, you can run multiple instances from the same
image, and you can create snapshots to back up your data. You can use AMI and snapshot copy to replicate your applications and workloads around the world. You can change the instance
types that your applications and workloads use as their resource requirements change. You can use CloudWatch to monitor your applications and workloads after you have imported them.
And you can take advantage of Auto Scaling, Elastic Load Balancing, and all of the other Amazon Web Services to support your applications and workloads after you have migrated them to
Amazon EC2.
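As a rough sketch, the web tier of option 2 could be expressed as a CloudFormation Resources fragment like the following. The ImageId would be the AMI produced by VM Import; the instance type, sizing, and cooldown values are hypothetical placeholders:

```json
{
  "WebLaunchConfig": {
    "Type": "AWS::AutoScaling::LaunchConfiguration",
    "Properties": {
      "ImageId": "ami-12345678",
      "InstanceType": "m3.medium"
    }
  },
  "WebAutoScalingGroup": {
    "Type": "AWS::AutoScaling::AutoScalingGroup",
    "Properties": {
      "AvailabilityZones": { "Fn::GetAZs": "" },
      "LaunchConfigurationName": { "Ref": "WebLaunchConfig" },
      "MinSize": "2",
      "MaxSize": "10"
    }
  },
  "ScaleUpPolicy": {
    "Type": "AWS::AutoScaling::ScalingPolicy",
    "Properties": {
      "AdjustmentType": "ChangeInCapacity",
      "AutoScalingGroupName": { "Ref": "WebAutoScalingGroup" },
      "ScalingAdjustment": "2",
      "Cooldown": "300"
    }
  }
}
```

A CloudWatch alarm on a metric such as average CPU utilization would typically be attached to trigger the scaling policy during traffic bursts.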
On-premises Instances
AWS OpsWorks cannot stop or start a registered on-premises instance.

Unassigning a registered on-premises instance triggers a Shutdown event. However, that event simply runs the assigned layers' Shutdown recipes. They perform tasks such as shutting
down services, but do not stop the instance.

AWS OpsWorks cannot autoheal a registered on-premises instance if it fails, but the instance will be marked as connection lost.

On-premises instances cannot use the Elastic Load Balancing, Amazon EBS, or Elastic IP address services.



Related Questions


Question : QuickTechie.com is setting up a multi-site solution where the application runs on premises as well as on AWS to achieve the minimum RTO. Which of the
below mentioned configurations will not meet the requirements of the multi-site solution scenario?
1. Configure data replication based on RTO.
2. Set up a single DB instance which will be accessed by both sites.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Set up a weighted DNS service like Route 53 to route traffic across sites.




Question : If a disaster occurs at 12:00 PM (noon) and the RPO is one hour, the system should recover all data that was in the system

1. before 11:00 AM
2. before 12:00 PM
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above
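The RPO arithmetic behind this question can be sketched in a few lines of Python; the date is a hypothetical placeholder, since only the time of day matters:

```python
from datetime import datetime, timedelta

# Hypothetical date; only the time of day matters for the illustration.
disaster_time = datetime(2015, 6, 1, 12, 0)   # 12:00 PM (noon)
rpo = timedelta(hours=1)                      # recovery point objective

# With an RPO of one hour, every transaction committed before this
# point in time must be recoverable.
recovery_point = disaster_time - rpo
print(recovery_point.strftime("%I:%M %p"))    # 11:00 AM
```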



Question : In the scenario of an AWS production environment with an AWS DR solution using multiple AWS Regions,
when you replicate data to a remote location, you should consider

A. Distance between the sites
B. Available bandwidth
C. Data rate required by your application
D. Replication technology
1. A,B,C
2. B,C,D
3. Access Mostly Uused Products by 50000+ Subscribers
4. A,B,C,D



Question : If a disaster occurs at 12:00 PM (noon) and the RTO is
eight hours, the DR process should restore the business process to the acceptable service level by _________

1. 8:00 PM
2. 9:00 PM
3. Access Mostly Uused Products by 50000+ Subscribers
4. 12:00 AM
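As with RPO, the RTO deadline is simple clock arithmetic; a short Python sketch (the date is a hypothetical placeholder, since only the time of day matters):

```python
from datetime import datetime, timedelta

# Hypothetical date; only the time of day matters for the illustration.
disaster_time = datetime(2015, 6, 1, 12, 0)   # 12:00 PM (noon)
rto = timedelta(hours=8)                      # recovery time objective

# With an RTO of eight hours, the business process must be restored to
# an acceptable service level no later than this deadline.
restore_deadline = disaster_time + rto
print(restore_deadline.strftime("%I:%M %p"))  # 08:00 PM
```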




Question : QuickTechie.com has a VPC for the Billing team and another VPC for the Training department.
The Billing team requires access to all the instances running in the Training team's VPC, while the Training team requires
access to all the resources in the Billing team's VPC. How can the organization set up this scenario?


1. Set up ACLs in both VPCs which allow traffic from the CIDR of the other VPC.
2. Set up VPC peering between the VPCs of the Training team and the Billing team.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Set up a security group in each VPC which allows traffic from the CIDR of the other VPC.
5. None of the above
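For illustration, the VPC peering approach in option 2 can be declared in CloudFormation roughly as follows. The VPC IDs, route table ID, and CIDR block are hypothetical placeholders, and a matching route is also needed in the Training team's VPC pointing back at the Billing team's CIDR:

```json
{
  "BillingToTrainingPeering": {
    "Type": "AWS::EC2::VPCPeeringConnection",
    "Properties": {
      "VpcId": "vpc-11111111",
      "PeerVpcId": "vpc-22222222"
    }
  },
  "BillingRouteToTraining": {
    "Type": "AWS::EC2::Route",
    "Properties": {
      "RouteTableId": "rtb-11111111",
      "DestinationCidrBlock": "10.1.0.0/16",
      "VpcPeeringConnectionId": { "Ref": "BillingToTrainingPeering" }
    }
  }
}
```

Routes through the peering connection, plus security group or ACL rules permitting the traffic, are what actually enable the cross-VPC access in both directions.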



Question : QuickTechie.com has hosted a web application which allows traffic on a given port from all IPs, and has attached the same security group to multiple
instances running in the same VPC but in different subnets. QuickTechie.com is planning to use one of these instances for testing a web application running on port
8080. How can QuickTechie set this up so that the security of the other instances is not affected?
1. QuickTechie.com should launch an instance in a separate subnet so that they will have a different security group.
2. QuickTechie.com should attach an ENI with every instance. The organization should create a new security group and update the security group of that instance's ENI.
3. Access Mostly Uused Products by 50000+ Subscribers
selected IP.
4. QuickTechie.com should first stop the instance and then change the security group of the selected instance.