
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately, this app requires access to a
number of on-premises services, and no one who configured the app still works for your company. Even worse, there is no documentation for it. What will allow the application running
inside the VPC to reach back and access its internal dependencies without being reconfigured?

(Choose 3 answers)
A. An AWS Direct Connect link between the VPC and the network housing the internal services.
B. An Internet Gateway to allow a VPN connection.
C. An Elastic IP address on the VPC instance
D. An IP address space that does not conflict with the one on-premises
E. Entries in Amazon Route 53 that allow the instance to resolve its dependencies' IP addresses
F. A VM Import of the current virtual machine


1. A,B,C
2. C,D,E
3. Access Mostly Uused Products by 50000+ Subscribers
4. A,D,F

Correct Answer : 4 (A, D, F)

Explanation: VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises
environment. This offering allows you to leverage your existing investments in the virtual machines that you have built to meet your IT security, configuration management, and
compliance requirements by bringing those virtual machines into Amazon EC2 as ready-to-use instances. You can also export imported instances back to your on-premises virtualization
infrastructure, allowing you to deploy workloads across your IT infrastructure. (Hence Statement F is correct)

AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS
and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network
experience than Internet-based connections.

AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry standard 802.1q VLANs, this
dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources such as objects stored in Amazon S3
using public IP address space, and private resources such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC) using private IP space, while maintaining
network separation between the public and private environments. Virtual interfaces can be reconfigured at any time to meet your changing needs.(Hence statement A should be correct)

While designing your Amazon VPC, the CIDR block should be chosen based on the number of IP addresses needed and on whether you are going to establish connectivity with your
data center. The allowed block size is between a /28 netmask and a /16 netmask, so an Amazon VPC can contain from 16 to 65,536 IP addresses. The CIDR block of an Amazon VPC can't be
modified once the VPC is created, so it is usually best to choose a CIDR block with more IP addresses. Also, when you design the Amazon VPC architecture to communicate with the on-premises/data center network,
ensure the CIDR range used in the Amazon VPC does not overlap or conflict with the CIDR blocks in your on-premises/data center network. Note: if you use the same CIDR blocks while
configuring the customer gateway, they may conflict.
E.g., if your VPC CIDR block is 10.0.0.0/16 and you have a 10.0.25.0/24 subnet in the data center, communication from instances in the VPC to the data center will not happen, since that subnet
is part of the VPC CIDR. To avoid these consequences, it is good to keep the IP ranges in different address blocks; for example, put the Amazon VPC in 10.0.0.0/16 and the data center in the
172.16.0.0/24 range. (Statement D should be correct)
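The overlap rule described above can be sketched with Python's standard ipaddress module; the CIDR blocks used here are the example ranges from the text:

```python
import ipaddress

def cidrs_overlap(cidr_a: str, cidr_b: str) -> bool:
    """Return True if the two CIDR blocks share any addresses."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return a.overlaps(b)

# The conflicting example from the text: the data-center subnet sits
# inside the VPC CIDR, so VPC-to-datacenter routing breaks.
print(cidrs_overlap("10.0.0.0/16", "10.0.25.0/24"))    # True: conflict
# Non-overlapping ranges, as recommended:
print(cidrs_overlap("10.0.0.0/16", "172.16.0.0/24"))   # False: safe
```

Running a check like this before creating the VPC and the VPN/Direct Connect link avoids the unroutable-subnet situation described above.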

An Elastic IP address (EIP) is a static IP address designed for dynamic cloud computing. With an EIP, you can mask the failure of an instance or software by rapidly remapping the
address to another instance in your account. Your EIP is associated with your AWS account, not a particular instance, and it remains associated with your account until you choose to
explicitly release it. (We don't need any remapping of instances here.)

Based on this, the correct option is 4.








Question : Your system recently experienced downtime. During the troubleshooting process, you found that a new administrator mistakenly terminated several production EC2
instances. Which of the following strategies will help prevent a similar situation in the future?
The administrator still must be able to:
- launch, start, stop, and terminate development resources.
- launch and start production instances.
1. Create an IAM user, which is not allowed to terminate instances by leveraging production EC2 termination protection.
2. Leverage resource based tagging along with an IAM user, which can prevent specific users from terminating production EC2 resources.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances.

Correct Answer : 2 (resource-based tagging combined with IAM)
Explanation: Customers have been able to use IAM policies to control which of their users or groups could start, stop, reboot, and terminate instances across all EC2
instances under an account. With this release of EC2-based resource permissions, customers can now strictly control which IAM users or groups can start, stop, reboot, and terminate
specific EC2 instances. This ability to assign control of an individual instance to a specific user or group helps organizations implement important security principles like
separation of duties (preventing critical functions from being in the hands of one user) and least privilege (providing each user access only to the minimum resources they need to
do their job). For example, you probably don't want to give everyone in your organization permission to terminate business-critical production instances, so now you can assign that
privilege to only a few trusted administrators. Below is a four-step process that will show you how to use our new resource-level permissions feature along with IAM policies to help
protect specific instances.

Step 1: Categorize your instances : The most flexible way to categorize your instances is by tagging them. For smaller environments, simple descriptive tags like "critical=true" may
be sufficient. For larger environments, you may want to organize your instances into more complex schemas using multiple tags. Some examples of tags that could be used to organize
your instances include: "stack=prod", "service_tier=1", "app=corporate_website", "layer=db", "department=engineering", "cost_center=153". For information on how to apply tags to your
instances, see the Tagging Your Amazon EC2 Resources documentation.
Step 2: Define how authorized users can (or can't) manage specific instances : Because IAM policies are based on least privilege, users will not be able to manage your critical instances
unless you give them permission to do so. You can create a policy that defines permissions to start, stop, reboot, and terminate specific instances (e.g., the ones carrying a specific tag
such as "critical"). To add more security controls around users who manage specific instances, you may want to require the use of multi-factor authentication within
a time period (e.g., the last 15 minutes). You can also force users to come from a trusted source IP range when making the start, stop, reboot, or terminate requests.
In some contexts, you may optionally choose to explicitly deny a group of users the ability to manage specific instances. Explicit denial policies are not generally required, since
IAM is deny-all by default, but the use of an explicit deny policy can provide an additional layer of protection, since the presence of a deny statement will cause the user to be
denied the ability to perform an action even if another policy statement would have allowed it.
Step 3: Lock down your tags. If you choose to use tags as a basis for setting permissions on instances, you will want to restrict which users have permissions to apply and remove
tags. For EC2, you will want to restrict which users have permissions to use the ec2:CreateTags and ec2:DeleteTags actions, so that only these users will be able to change your
instance inventory. Note: We will be enabling tag-specific permissions for ec2:CreateTags and ec2:DeleteTags in the future, which will enable you to set permissions on a per-tag
basis.
Step 4: Attach your policies to IAM users : As with other IAM policies, you can attach any of the policies above to any IAM principal, including individual IAM users, groups, and
roles. The policies will only apply to the principals they are attached to, so you will want to perform periodic audits to ensure that the policies have been deployed appropriately.
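The tag-based approach from Steps 2-3 can be sketched as a single IAM policy document. The tag key/value ("stack=prod") and the choice to deny only ec2:TerminateInstances are illustrative assumptions, not the only valid layout; building the JSON in Python simply keeps it well-formed:

```python
import json

# Illustrative policy: explicitly deny termination of any instance
# tagged stack=prod. Explicit deny wins over any allow, matching the
# layered-protection idea described in the text. The tag key/value
# here ("stack"/"prod") is an assumed naming convention.
deny_terminate_prod = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:TerminateInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            # The ec2:ResourceTag/<key> condition key scopes the deny
            # to instances carrying the matching tag.
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/stack": "prod"}
            },
        }
    ],
}

print(json.dumps(deny_terminate_prod, indent=2))
```

Attaching a policy like this to the new administrator (Step 4), while locking down ec2:CreateTags/ec2:DeleteTags (Step 3), leaves them free to manage development resources but unable to terminate production instances.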

Enabling Termination Protection for an Instance : By default, you can terminate your instance using the Amazon EC2 console, command line interface, or API. If you want to prevent
your instance from being accidentally terminated using Amazon EC2, you can enable termination protection for the instance. The DisableApiTermination attribute controls whether the
instance can be terminated using the console, CLI, or API. By default, termination protection is disabled for your instance. You can set the value of this attribute when you launch
the instance, while the instance is running, or while the instance is stopped (for Amazon EBS-backed instances).

The DisableApiTermination attribute does not prevent you from terminating an instance by initiating shutdown from the instance (using an operating system command for system shutdown)
when the InstanceInitiatedShutdownBehavior attribute is set. For more information, see Changing the Instance Initiated Shutdown Behavior. You can't prevent instances that are part of
an Auto Scaling group from terminating using termination protection. However, you can specify which instances should terminate first.







Question : Your fortune company has undertaken a TCO (total cost of ownership) analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that
all employees would be granted access to use Amazon S3 for storage of their personal documents. Which of the following will you need to consider so you can set up a solution that
incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? (Choose 3 answers)

A. Setting up a federation proxy or identity provider
B. Using AWS Security Token Service to generate temporary tokens
C. Tagging each folder in the bucket
D. Configuring IAM role
E. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket

1. A,B,C
2. C,D,E
3. Access Mostly Uused Products by 50000+ Subscribers
4. A,C,E
Correct Answer : 1 (A, B, C)
Explanation: Statement E is not at all correct, as we wish to use the existing directory infrastructure rather than duplicate every corporate user in IAM.

Here's the basic flow of an AD user who wants to access the AWS Management Console:
1. User signs on to the corporate network with their AD credentials. The sample creates an internal website that hosts a proxy server. A user browses to that website. The site
authenticates the user against AD and displays a set of IAM roles that are determined by the user's AD group membership.
2. The user selects the desired role and clicks on Sign in to AWS Management Console. Behind the scenes, the proxy calls the AWS Security Token Service (STS) to assume the selected
role. The response includes temporary security credentials. Using these credentials, the federation proxy constructs a temporary sign-in URL.
3. Access Mostly Uused Products by 50000+ Subscribers
limited to the privileges defined in the role they selected.
By default, the console session will expire after one hour and the console will be inaccessible. This helps protect your AWS account in the case where users mistakenly leave their
computers unlocked while signed in to the console. When the session expires the user will be redirected to a page that includes a URL to the site that hosts the federation proxy.
Clicking the URL will re-direct the user to the site that hosts the federation proxy so they can re-authenticate. (Hence A and B are correct)
A question came up about whether you can use this technique for federated users instead of for IAM users, as the examples show. Yes you can, but not exactly the same way. Federated
users do not have an entity inside of IAM. Therefore, the aws:username variable is not available when using federated users. However, when you work with federated users (using the
AWS STS GetFederationToken or AssumeRole APIs), you're using a proxy server to request temporary security credentials on behalf of the federated user. When you request temporary
security credentials you have the option of passing a policy as part of the API request. Therefore, before you call GetFederationToken or AssumeRole, you can create a policy and
replace the federated user's name where the aws:username variable is used.

Strictly speaking, there is no real folder inside an S3 bucket, only key prefixes. Hence A, B and C should be correct. (There is a little confusion between option C and D; please validate the same.)
Refer : http://blogs.aws.amazon.com/security/post/Tx1P2T3LFXXCNB5/Writing-IAM-policies-Grant-access-to-user-specific-folders-in-an-Amazon-S3-bucke (This Link is available in study
notes)
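The per-user "folder" restriction from the linked post hinges on the ${aws:username} policy variable used as a key prefix. A minimal sketch, assuming a hypothetical bucket name my-company-docs and a home/ prefix convention:

```python
import json

# Hypothetical bucket name and prefix layout. IAM resolves
# ${aws:username} at evaluation time, scoping each IAM user to
# their own prefix. As the text notes, federated users have no
# aws:username, so a federation proxy must substitute the name
# into the policy before calling GetFederationToken/AssumeRole.
user_folder_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Let users list only their own prefix in the bucket.
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-company-docs",
            "Condition": {
                "StringLike": {"s3:prefix": ["home/${aws:username}/*"]}
            },
        },
        {   # Object access, but only under the user's own prefix.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::my-company-docs/home/${aws:username}/*",
        },
    ],
}

print(json.dumps(user_folder_policy, indent=2))
```

This is the piece the federation proxy (option A) attaches, via STS temporary credentials (option B), to give each signed-on directory user access to only their designated prefix.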



Related Questions


Question : An application is running Hadoop jobs. The application reads data from DynamoDB and generates a temporary file of TBs.
The whole process runs for 60 minutes and the output of the job is stored in S3. Which of the below mentioned options is
the most cost-effective solution in this case?
1. Use an on demand instance to run Hadoop jobs and configure them with EBS volumes for persistent storage.
2. Use Spot Instances to run Hadoop jobs and configure them with ephemeral storage for output file storage.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Use an on demand instance to run Hadoop jobs and configure them with ephemeral storage for output file storage.


Question : QuickTechie.com has set up a web application in the AWS VPC. The organization is running a database on an EC2 instance,
and the application server connects to the DB server only on the internal IP. The organization is looking for HA and DR for the database.
Which of the below mentioned options fulfils the organization's need for a DB backup?
1. Setup the database on the instance with an elastic network interface which will have a fixed private IP address and also keep a hot standby running in a separate zone
with a different subnet.
2. Setup the database in the private subnet and keep a hot standby running in the public subnet for immediate failover.
3. Access Mostly Uused Products by 50000+ Subscribers
with a different subnet.
4. Use the AWS storage gateway with VPC to switchover from the primary to secondary DB in separate zones.



Question : QuickTechie.com has people in the IT operations team who are responsible for managing the AWS infrastructure.
QuickTechie wants to ensure that only the information security team manager from this team can change the rules of
the security groups in the VPC. Which of the below mentioned IAM policies will help in this scenario?
1. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:AuthorizeSecurityGroupIngress", "ec2:AuthorizeSecurityGroupEgress",
"ec2:RevokeSecurityGroupIngress", "ec2:RevokeSecurityGroupEgress" ], "Resource": "arn:aws:ec2:region:account:security-group/*" }, { "Effect": "Allow", "Action":
"ec2:DescribeSecurityGroups", "Resource": "*" } ] }
2. { "Version": "2012-10-17", "Statement": [ { "Effect": "Deny", "Action": [ "ec2:AuthorizeSecurityGroupIngress", "ec2:AuthorizeSecurityGroupEgress",
"ec2:RevokeSecurityGroupIngress", "ec2:RevokeSecurityGroupEgress" ], "Resource": "arn:aws:ec2:region:account:security-group/*" } ] }
3. Access Mostly Uused Products by 50000+ Subscribers
"ec2:RevokeSecurityGroupIngress", "ec2:RevokeSecurityGroupEgress"], } } ] }
4. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "vpc:AuthorizeSecurityGroupIngress", "vpc:AuthorizeSecurityGroupEgress" ], "Resource":
"arn:aws:ec2:region:account:security-group/*" } ] }



Question : QuickTechie.com has hosted a Tomcat-based web application on AWS EC2 and opened one port for the selected IPs and another port for everyone else.
The organization has noticed that over the weekend their AWS usage increased by a few hundred dollars, because data transfer in the range
of 50-60 TB happened during the weekend. The organization did not run any special program which could cause this transfer.
What could be the potential source for a breach in the security?
1. QuickTechie.com might have enabled UDP ports for data transfer.
2. QuickTechie.com might have enabled TCP ports for data transfer.
3. Access Mostly Uused Products by 50000+ Subscribers
4. QuickTechie.com might not have changed the default admin password of the Tomcat Manager.



Question : QuickTechie.com provides scalable and secure SaaS to its clients. They are planning to host a web server and an app server on AWS VPC as separate
tiers. The organization wants to implement scalability by configuring Auto Scaling and a load balancer with their app servers (middle tier) too.
Which of the below mentioned options suits their requirements?
1. The user should make ELB with EC2-CLASSIC and enable SSH with it for security.
2. Since ELB is internet facing, it is recommended to setup HAProxy as the Load balancer within the VPC.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Create an Internal Load balancer with VPC and register all the App servers with it.


Question : QuickTechie.com is trying to set up AWS VPC with Auto Scaling. Which of the below mentioned steps is
not required to be configured by the organization to set up AWS VPC?
1. Configure the Auto Scaling group with the VPC ID in which instances will be launched.
2. Configure the Auto Scaling Launch configuration with the VPC security group.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Configure the Auto Scaling Launch configuration which does not allow assigning a public IP to instances.