
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : You are responsible for a legacy web application whose server environment is approaching
end of life. You would like to migrate this application to AWS as quickly as possible, since
the application environment currently has the following limitations:
The VM's single 10GB VMDK is almost full.
The virtual network interface still uses the 10Mbps driver, which leaves your
100Mbps WAN connection completely underutilized.
It is currently running on a highly customized Windows VM within a VMware
environment.
You do not have the installation media.
This is a mission-critical application with an RTO (Recovery Time Objective) of 8 hours and an
RPO (Recovery Point Objective) of 1 hour. How could you best migrate this application to
AWS while meeting your business continuity requirements?

1. Use the EC2 VM Import Connector for vCenter to import the VM into EC2.
2. Use Import/Export to import the VM as an EBS snapshot and attach to EC2.
3.
4. Use the ec2-bundle-instance API to import an image of the VM into EC2.


Answer: 1

Explanation: Option 2 would have you export the VMDK and ship it to AWS to be loaded, which does not seem a good solution for a 10GB VMDK file; the VM Import Connector imports the VM directly over the network.
VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment. This
offering allows you to leverage your existing investments in the virtual machines that you have built to meet your IT security, configuration management, and compliance requirements
by bringing those virtual machines into Amazon EC2 as ready-to-use instances. You can also export imported instances back to your on-premises virtualization infrastructure, allowing
you to deploy workloads across your IT infrastructure.

VM Import/Export is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon S3.
To import your images, use the AWS CLI or other developer tools to import a virtual machine (VM) image from your VMware environment. If you use the VMware vSphere virtualization
platform, you can also use the AWS Management Portal for vCenter to import your VM. As part of the import process, VM Import will convert your VM into an Amazon EC2 AMI, which you
can use to run Amazon EC2 instances. Once your VM has been imported, you can take advantage of Amazon's elasticity, scalability and monitoring via offerings like Auto Scaling,
Elastic Load Balancing and CloudWatch to support your imported images.
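As an illustration of the CLI/API import path, here is a minimal boto3 sketch, assuming the VMDK has already been uploaded to S3; the bucket name, key, and description are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Start an import task from a VMDK previously uploaded to S3
# (bucket and key below are hypothetical placeholders).
response = ec2.import_image(
    Description="Legacy web application VM",
    DiskContainers=[{
        "Description": "Primary 10GB disk",
        "Format": "vmdk",
        "UserBucket": {
            "S3Bucket": "my-import-bucket",
            "S3Key": "vms/legacy-webapp.vmdk",
        },
    }],
)

# The import runs asynchronously; poll until the resulting AMI is ready.
task_id = response["ImportTaskId"]
print(ec2.describe_import_image_tasks(ImportTaskIds=[task_id]))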
You can export previously imported EC2 instances using the Amazon EC2 API tools. You simply specify the target instance, virtual machine file format and a destination S3 bucket, and
VM Import/Export will automatically export the instance to the S3 bucket. You can then download and launch the exported VM within your on-premises virtualization infrastructure.
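The export direction can be sketched the same way; the instance ID and destination bucket below are again hypothetical:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Export a previously imported instance back to S3 as a VMware image
# (instance ID and bucket are hypothetical placeholders).
response = ec2.create_instance_export_task(
    InstanceId="i-0123456789abcdef0",
    TargetEnvironment="vmware",
    ExportToS3Task={
        "DiskImageFormat": "VMDK",
        "ContainerFormat": "ova",
        "S3Bucket": "my-export-bucket",
        "S3Prefix": "exports/",
    },
)
print(response["ExportTask"]["ExportTaskId"])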







Question : You are migrating a legacy client-server application to AWS. The application responds to a
specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple
application servers and a database server. Remote clients use TCP to connect to the
application servers. The application servers need to know the IP address of the clients in
order to function properly and are currently taking that information from the TCP socket. A
Multi-AZ RDS MySQL instance will be used for the database.
During the migration you can change the application code, but you have to file a change
request.
How would you implement the architecture on AWS in order to maximize scalability and
high availability?

1. File a change request to implement Proxy Protocol support in the application. Use an
ELB with a TCP listener and Proxy Protocol enabled to distribute load on two application
servers in different AZs.
2. File a change request to implement Cross-Zone support in the application. Use an ELB
with a TCP listener and Cross-Zone Load Balancing enabled, with two application servers in
different AZs.
3. Use Route 53 with Latency Based Routing enabled to distribute load on two application
servers in different AZs.
4. File a change request to implement Alias Resource support in the application. Use a Route
53 Alias Resource Record to distribute load on two application servers in different AZs.


Answer: 1
Explanation: Proxy Protocol is an Internet protocol used to carry connection information from the source requesting the connection to the destination for which the connection was
requested. Elastic Load Balancing uses Proxy Protocol version 1, which uses a human-readable header format.

By default, when you use Transmission Control Protocol (TCP) or Secure Sockets Layer (SSL) for both front-end and back-end connections, your load balancer forwards requests to the
back-end instances without modifying the request headers. If you enable Proxy Protocol, a human-readable header is added to the request header with connection information such as the
source IP address, destination IP address, and port numbers. The header is then sent to the back-end instance as part of the request.
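For reference, the version 1 header is a single text line prepended to the connection; a minimal Python sketch of parsing it on a back-end server (the sample header in the comment is illustrative):

# The Proxy Protocol v1 header is one human-readable line that precedes
# the client's data on the TCP connection, for example:
#   PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\n
def parse_proxy_protocol_v1(header_line: bytes) -> dict:
    # Handles the TCP4/TCP6 forms only; this is a sketch, not a full parser.
    parts = header_line.decode("ascii").rstrip("\r\n").split(" ")
    if parts[0] != "PROXY" or len(parts) != 6:
        raise ValueError("not a Proxy Protocol v1 TCP header")
    _, protocol, src_ip, dst_ip, src_port, dst_port = parts
    return {
        "protocol": protocol,      # "TCP4" or "TCP6"
        "source_ip": src_ip,       # the original client IP the app needs
        "destination_ip": dst_ip,
        "source_port": int(src_port),
        "destination_port": int(dst_port),
    }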

You can enable Proxy Protocol on ports that use either the SSL or TCP protocol. You can use Proxy Protocol to capture the source IP of your client when you are using a non-HTTP
protocol, or when you are using HTTPS and not terminating the SSL connection on your load balancer.
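On a Classic Load Balancer this is done by creating a ProxyProtocolPolicyType policy and attaching it to the back-end port; a minimal boto3 sketch, with a hypothetical load balancer name:

import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Create the Proxy Protocol policy (load balancer name is hypothetical).
elb.create_load_balancer_policy(
    LoadBalancerName="legacy-app-elb",
    PolicyName="EnableProxyProtocol",
    PolicyTypeName="ProxyProtocolPolicyType",
    PolicyAttributes=[{"AttributeName": "ProxyProtocol", "AttributeValue": "true"}],
)

# Attach the policy to the back-end instance port (TCP 80 here).
elb.set_load_balancer_policies_for_backend_server(
    LoadBalancerName="legacy-app-elb",
    InstancePort=80,
    PolicyNames=["EnableProxyProtocol"],
)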





Question : Your department creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce
(EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this
system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data?
1. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
2. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for
Amazon Redshift.
3.
4. Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift.


Answer: 1

Explanation: RRS in S3 is suitable only for data that can be generated again, not for all of the data: the PDF and CSV reports can be rebuilt from the raw logs, but the logs themselves cannot be regenerated. Hence options 2 and 3 are out.
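A minimal boto3 sketch of storing a regenerable report under the reduced-redundancy storage class; the bucket and key are hypothetical placeholders:

import boto3

s3 = boto3.client("s3")

# Store a regenerable daily report with reduced redundancy
# (bucket and key are hypothetical placeholders).
with open("report.pdf", "rb") as report:
    s3.put_object(
        Bucket="analytics-reports",
        Key="daily/2015-06-01-report.pdf",
        Body=report,
        StorageClass="REDUCED_REDUNDANCY",
    )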
Get the Best Value for Amazon EC2 Capacity
Spot instances provide the reliability, security, performance, control, and elasticity of Amazon EC2, at low market-driven prices that decrease even further when demand subsides.
Reduce Operating Costs
Reduce your operating costs by 50-90% with Spot, compared to On-Demand instances. Amazon EC2 Spot instances are spare EC2 instances that you can bid on to run your cloud computing
applications. Since Spot instances are often available at a lower price, you can significantly reduce the cost of running your applications, grow your application's compute capacity
and throughput for the same budget, and enable new types of cloud computing applications.
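As an illustration, a minimal boto3 sketch of an EMR cluster that runs its task capacity on Spot instances; the cluster name, instance types, and bid price are hypothetical placeholders:

import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Launch a daily analytics cluster whose task nodes run on Spot instances
# (name, instance types, and bid price are hypothetical placeholders).
emr.run_job_flow(
    Name="daily-log-reports",
    ReleaseLabel="emr-4.7.0",
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER", "Market": "ON_DEMAND",
             "InstanceType": "m3.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "Market": "ON_DEMAND",
             "InstanceType": "m3.xlarge", "InstanceCount": 2},
            {"Name": "task-spot", "InstanceRole": "TASK", "Market": "SPOT",
             "BidPrice": "0.10", "InstanceType": "m3.xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)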

So for EMR, use Spot instances. An Amazon Redshift data warehouse is an enterprise-class relational database query and management system.

Amazon Redshift supports client connections with many types of applications, including business intelligence (BI), reporting, data, and analytics tools.

When you execute analytic queries, you are retrieving, comparing, and evaluating large amounts of data in multiple-stage operations to produce a final result.

Amazon Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing, columnar data storage, and very efficient, targeted
data compression encoding schemes.
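Since Redshift accepts standard PostgreSQL client connections, here is a minimal sketch of running an analytic query from Python with psycopg2; the endpoint, database, credentials, and table are hypothetical placeholders:

import psycopg2

# Connect to the cluster endpoint (host, database, and credentials
# are hypothetical placeholders).
conn = psycopg2.connect(
    host="analytics.abc123xyz.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="reports",
    user="admin",
    password="example-password",
)

with conn.cursor() as cur:
    # A typical analytic query: aggregate daily log volume.
    cur.execute("""
        SELECT date_trunc('day', event_time) AS day, count(*) AS hits
        FROM access_logs
        GROUP BY 1
        ORDER BY 1
    """)
    for day, hits in cur.fetchall():
        print(day, hits)
conn.close()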



Related Questions


Question : You deployed your company website using Elastic Beanstalk and you enabled log file
rotation to S3. An Elastic MapReduce job periodically analyzes the logs on S3 to build a
usage dashboard that you share with your CIO. You recently improved overall performance
of the website using CloudFront for dynamic content delivery, with your website as the
origin.
After this architectural change, the usage dashboard shows that the traffic on your website
dropped by an order of magnitude. How do you fix your usage dashboard?

1. Enable CloudFront to deliver access logs to S3 and use them as input to the Elastic MapReduce job.
2. Turn on CloudTrail and use the trail log files on S3 as input to the Elastic MapReduce job.
3.
4. Use the Elastic Beanstalk "Rebuild Environment" option to update log delivery to the Elastic MapReduce job.
5. Use the Elastic Beanstalk "Restart App server(s)" option to update log delivery to the Elastic MapReduce job.




Question : A large real-estate brokerage is exploring the option of adding a cost-effective location-based alert to their existing mobile application. The application backend
infrastructure currently runs on AWS. Users who opt in to this service will receive alerts on their mobile device regarding real-estate offers in proximity to their location.
For the alerts to be relevant, delivery time needs to be within a few minutes, and the existing mobile app has 5 million users across the USA. Which one of the following
architectural suggestions would you make to the customer?
1. The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances. DynamoDB will be used to store and
retrieve relevant offers. EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
2. Use AWS Direct Connect or VPN to establish connectivity with mobile carriers. EC2 instances will receive the mobile application's location through the carrier connection.
RDS will be used to store and retrieve relevant offers. EC2 instances will communicate with mobile carriers to push alerts back to the mobile application.
3. AWS Mobile Push will be used to send offers to the mobile application.
4. The mobile application will send device location using AWS Mobile Push. EC2 instances will retrieve the relevant offers from DynamoDB.
EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.




Question : Your company is in the process of developing a next generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for
their pets. Each collar will push 30KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending
information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform ensuring the following requirements are met:
provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic, and parallel. The results of the
analytic processing should be persisted for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?
1. Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline and save the results to a Redshift Cluster.
2. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR.
3.
4. Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis and save the results to DynamoDB.


Question : You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link
each account's bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget, you would like to implement a way for administrators in the Master
account to have access to stop, delete, and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal.


1. Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the
resources in the account by inheriting permissions from the Master account.
2. Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts.
3.
4. Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts.



Question : You've been brought in as a solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC).
The previous architect has already deployed a 3-tier VPC. The configuration is as follows:
VPC: vpc-2f8bc447
IGW: igw-2d8bc445
NACL: acl-208bc448

Subnets:
Web servers: subnet-258bc44d
Application servers: subnet-248bc44c
Database servers: subnet-9189c6f9

Route Tables:
rtb-218bc449
rtb-238bc44b

Associations:
subnet-258bc44d : rtb-218bc449
subnet-248bc44c : rtb-238bc44b
subnet-9189c6f9 : rtb-238bc44b
You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the internet. Application and database servers cannot have direct access to
the internet. Which configuration below will give you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates
from the Internet?
1. Create a bastion and NAT Instance in subnet-248bc44c and add a route from rtb-238bc44b to subnet-258bc44d.
2. Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within subnet-248bc44c.
3. subnet-248bc44c.
4. Create a bastion and NAT instance in subnet-258bc44d and add a route from rtb-238bc44b to the NAT instance.


Question : You have deployed a web application targeting a global audience across multiple AWS Regions under the domain name example.com. You decide to use Route 53 Latency Based
Routing to serve web requests to users from the region closest to the user. To provide business continuity in the event of server downtime you configure weighted record sets
associated with two web servers in separate Availability Zones per region. During a DR test you notice that when you disable all web servers in one of the regions, Route 53 does
not automatically direct all users to the other region. What could be happening? (Choose 2 answers)

A. Latency resource record sets cannot be used in combination with weighted resource record sets.
B. You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers.
C. The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other region.
D. One of the two working web servers in the other region did not pass its HTTP health check.
E. You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the servers.



1. A,C
2. D,E
3.
4. B,C