
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a Master AWS account using Consolidated Billing. To stay within budget, you would like to give administrators in the Master account access to stop, delete, and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal.


1. Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account.
2. Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts.
3. Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access.
4. Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts


Correct Answer : 3


Explanation: Use a role to delegate access to resources that are in different AWS accounts that you own, sharing resources in one account with users in a different account. By setting up cross-account access this way, you don't need to create individual IAM users in each account, and users don't have to sign out of one account and sign in to another in order to access resources in different AWS accounts. Here, the Dev and Test accounts each create a cross-account role with admin permissions and grant the Master account the right to assume that role.
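To make this concrete, below is a minimal boto3 sketch of an administrator in the Master account assuming such a role; the account ID, role name, and instance ID are hypothetical placeholders, not values from the question.

import boto3

# Assume the cross-account admin role created in the Dev account.
# The account ID and role name are hypothetical placeholders.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/MasterAccountAdmin",
    RoleSessionName="master-admin",
)["Credentials"]

# Use the temporary credentials to act on Dev-account resources,
# e.g. stopping an EC2 instance to stay within budget.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])  # hypothetical ID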





Question : You've been brought in as a solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect has already deployed a 3-tier VPC. The configuration is as follows:
VPC: vpc-2f8bc447
IGW: igw-2d8bc445
NACL: acl-2080c448

Subnets:
Web servers: subnet-258bc44d
Application servers: subnet-248bc44c
Database servers: subnet-9189c6f9

Route Tables:
rtb-218bc449
rtb-238bc44b

Associations:
subnet-258bc44d rtb-218bc449
subnet-248bc44c rtb-238bc44b
subnet-9189c6f9 rtb-238bc44b
You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the internet. Application and database servers cannot have direct access to
the internet. Which configuration below will allow you to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet?
1. Create a bastion and NAT Instance in subnet-248bc44c and add a route from rtb-238bc44b to subnet-258bc44d.
2. Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within subnet-248bc44c.
3. Create a bastion and NAT instance in subnet-258bc44d and add a route from rtb-238bc44b to subnet-248bc44c.
4. Create a bastion and NAT instance in subnet-258bc44d and add a route from rtb-238bc44b to the NAT instance.

Correct Answer : 4
Explanation: This is a three-tier architecture. The NAT instance must sit in a public subnet (one whose route table points to the IGW), while the application and database tiers belong in private subnets.

Private subnet: subnet-248bc44c (application tier); private subnet: subnet-9189c6f9 (database tier); public subnet: subnet-258bc44d (web tier)
Route table associations: subnet-258bc44d rtb-218bc449 (public); subnet-248bc44c rtb-238bc44b (private); subnet-9189c6f9 rtb-238bc44b (private)

So the NAT instance and bastion must be in the public subnet, subnet-258bc44d (which rules options 1 and 2 out), and the private route table rtb-238bc44b needs a default route to the NAT instance so the private subnets can reach the Internet. An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet; it therefore imposes no availability risks or bandwidth constraints on your network traffic. An Internet gateway serves two purposes: to provide a target in your VPC route tables for Internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IP addresses. Attach an Internet gateway to your VPC, ensure that your subnet's route table points to the Internet gateway, ensure that instances in your subnet have public IP addresses or Elastic IP addresses, and ensure that your network access control and security group rules allow the relevant traffic to flow to and from your instance.
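As an illustration, here is a minimal boto3 sketch of the two routes described above; the route table and gateway IDs come from the question, while the NAT instance ID is a hypothetical placeholder.

import boto3

ec2 = boto3.client("ec2")

# Public route table: send Internet-bound traffic to the Internet gateway.
ec2.create_route(
    RouteTableId="rtb-218bc449",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-2d8bc445",
)

# Private route table: send Internet-bound traffic to the NAT instance,
# so the app and database subnets can fetch updates without being
# directly reachable from the Internet.
ec2.create_route(
    RouteTableId="rtb-238bc44b",
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId="i-0abc1234de567890f",  # hypothetical NAT instance ID
)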







Question : You have deployed a web application targeting a global audience across multiple AWS Regions under the domain name example.com. You decide to use Route 53 Latency Based Routing to serve web requests to users from the region closest to the user. To provide business continuity in the event of server downtime, you configure weighted record sets associated with two web servers in separate Availability Zones per region. During a DR test you notice that when you disable all web servers in one of the regions, Route 53 does not automatically direct all users to the other region. What could be happening? (Choose 2 answers)

A. Latency resource record sets cannot be used in combination with weighted resource record sets.
B. You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers.
C. The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other region.
D. One of the two working web servers in the other region did not pass its HTTP health check.
E. You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the servers.



1. A,C
2. D,E
3. B,E
4. B,C


Correct Answer : 3
Explanation: To discover the availability of your EC2 instances, the load balancer periodically sends pings, attempts connections, or sends requests to test the EC2
instances. These tests are called health checks. The status of the instances that are healthy at the time of the health check is InService. The status of any instances that are
unhealthy at the time of the health check is OutOfService. The load balancer performs health checks on all registered instances, whether the instance is in a healthy state or an
unhealthy state.

The load balancer routes requests only to the healthy instances. When the load balancer determines that an instance is unhealthy, it stops routing requests to that instance. The load
balancer resumes routing requests to the instance when it has been restored to a healthy state.

The load balancer checks the health of the registered instances using either the default health check configuration provided by Elastic Load Balancing or a health check configuration
that you configure.

If you have associated your Auto Scaling group with a load balancer, you can use the load balancer health check to determine the health state of instances in your Auto Scaling group.
By default, an Auto Scaling group periodically determines the health state of each instance.

For both latency alias resource record sets, you set the value of Evaluate Target Health to Yes.
You use the Evaluate Target Health setting on each latency alias resource record set to make Amazon Route 53 evaluate the health of the alias targets (the weighted resource record sets) and respond accordingly.
Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource record set for the
us-east-1 region.

Amazon Route 53 selects a weighted resource record set based on weight. Evaluate Target Health is Yes for the latency alias resource record set, so Amazon Route 53 checks the health
of the selected weighted resource record set.

The health check failed, so Amazon Route 53 chooses another weighted resource record set based on weight and checks its health. That resource record set also is unhealthy.

Amazon Route 53 backs out of that branch of the tree, looks for the latency alias resource record set with the next-best latency, and chooses the resource record set for
ap-southeast-2.

Amazon Route 53 again selects a resource record set based on weight, and then checks the health of the selected resource record set. The health check passed, so Amazon Route 53
returns the applicable value in response to the query.
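For reference, here is a minimal boto3 sketch of upserting a latency alias record set with Evaluate Target Health enabled; the hosted zone ID and target DNS name are hypothetical placeholders.

import boto3

route53 = boto3.client("route53")

# Latency alias record for us-east-1 pointing at that region's weighted
# records. EvaluateTargetHealth=True is what lets Route 53 fail over to
# the other region when every target behind this alias is unhealthy.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLEZONE",  # hypothetical hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "A",
                "SetIdentifier": "us-east-1",
                "Region": "us-east-1",
                "AliasTarget": {
                    "HostedZoneId": "Z1EXAMPLEZONE",  # zone of the target
                    "DNSName": "us-east-1.example.com.",  # hypothetical
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)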





Related Questions


Question : A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server tier currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes.

Which AWS storage and database architecture meets the requirements of the application?

1. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs; database backed up via DB snapshots.
2. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.
3. Web servers: store read-only data in an EC2 NFS server, and mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs; database backed up via DB snapshots.
4. Web servers: store read-only data in an EC2 NFS server, and mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs; database backed up via DB snapshots.



Question : Your customer wishes to deploy an enterprise application to AWS which will consist of several web servers, several application servers, and a small (GB) Oracle database. Information is stored both in the database and in the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database. Which backup architecture will meet these requirements?
1. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs, and supplement with file-level backups to S3 using traditional enterprise backup software to provide file-level restore.
2. Backup RDS using a Multi-AZ deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file-level restore.
3. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots, and supplement with file-level backups to Glacier using traditional enterprise backup software to provide file-level restore.
4. Backup the RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.


Question : You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?

1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
2. Ingest data into a DynamoDB table and move old data to a Redshift cluster
3. Replace the RDS instance with a 6-node Redshift cluster with 96TB of storage
4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS
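As a rough sanity check on the numbers above: scaling from 100 pilot sensors to 100K is a 1,000x increase, so the measured 10 IOPS peak and 3GB/month of data become roughly 10K IOPS and about 3TB/month; two years of retention is then on the order of 72TB, far beyond the current 500GB RDS instance, which is why simply upgrading RDS leaves no room for further scaling.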


Question : Your company produces customer-commissioned, one-of-a-kind skiing helmets combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to heads-up displays, GPS rear-view cams, and any other technical innovation they wish to embed in the helmet. The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are of the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics using GPUs with CUDA across a cluster of servers with low-latency networking. What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time?
1. Use AWS Data Pipeline to manage movement of data and meta-data and assessments. Use an auto-scaling group of G2 instances in a placement group.
2. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data and meta-data. Use an auto-scaling group of G2 instances in a placement group.
3. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data and meta-data. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).
4. Use AWS Data Pipeline to manage movement of data and meta-data and assessments. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).


Question : You are developing a new mobile application and are considering storing user preferences in AWS. This would provide a more uniform cross-device experience to users using multiple mobile devices to access the application. The preference data for each user is estimated to be 50KB in size. Additionally, 5 million customers are expected to use the application on a regular basis. The solution needs to be cost-effective, highly available, scalable, and secure. How would you design a solution to meet the above requirements?
1. Set up an RDS MySQL instance in 2 Availability Zones to store the user preference data. Deploy a public-facing application on a server in front of the database to manage security and access credentials.
2. Set up a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine-Grained Access Control to authenticate and authorize access.
3. Set up an RDS MySQL instance with multiple read replicas in 2 Availability Zones to store the user preference data. The mobile application will query the user preferences from the read replicas. Leverage the MySQL user management and access privilege system to manage security and access credentials.
4. Store the user preference data in S3. Set up a DynamoDB table with an item for each user and an item attribute pointing to the user's S3 object. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.
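For context, here is a minimal boto3 sketch of the one-item-per-user layout described in option 2; the table name and preference attributes are hypothetical placeholders.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("UserPreferences")  # hypothetical table name

# One item per user, keyed by user ID, holding the preference attributes.
table.put_item(Item={
    "user_id": "user-12345",
    "theme": "dark",
    "language": "en-US",
})

# The mobile client reads its own item back with a single key lookup.
prefs = table.get_item(Key={"user_id": "user-12345"}).get("Item")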


Question : A company is building a voting system for a popular TV show; viewers will watch the performances, then visit the show's website to vote for their favorite performer. It is expected that in a short period of time after the show has finished, the site will receive millions of visitors. The visitors will first log in to the site using their Amazon.com credentials and then submit their vote. After the voting is completed, the page will display the vote totals. The company needs to build the site such that it can handle the rapid influx of traffic while maintaining good performance, but also wants to keep costs to a minimum. Which of the design patterns below should they use?

1. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user, then process the user's vote and store the result in a multi-AZ Relational Database Service instance.
2. Use CloudFront and the static website hosting feature of S3 with the JavaScript SDK to call the Login with Amazon service to authenticate the user, and use IAM Roles to gain permissions to a DynamoDB table to store the user's vote.
3. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user, then process the user's vote and store the result in a DynamoDB table, using IAM Roles for EC2 instances to gain permissions to the DynamoDB table.
4. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user, then process the user's vote and store the result in an SQS queue, using IAM Roles for EC2 instances to gain permissions to the SQS queue. A set of application servers will then retrieve the items from the queue and store the result in a DynamoDB table.
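As an illustration of the queue-buffered write path in option 4, here is a minimal boto3 sketch; the queue URL and table name are hypothetical placeholders.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111111111111/votes"  # hypothetical

# Web tier: enqueue each vote instead of writing to the database directly,
# absorbing the traffic spike right after the show ends.
sqs.send_message(QueueUrl=queue_url, MessageBody="performer-7")

# Application tier: drain the queue at a steady rate into DynamoDB.
table = boto3.resource("dynamodb").Table("Votes")  # hypothetical table
for msg in sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=10).get("Messages", []):
    table.update_item(
        Key={"performer": msg["Body"]},
        UpdateExpression="ADD vote_count :one",
        ExpressionAttributeValues={":one": 1},
    )
    sqs.delete_message(QueueUrl=queue_url,
                       ReceiptHandle=msg["ReceiptHandle"])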