
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : Your company hosts a social media site supporting users in multiple countries. You have
been asked to provide a highly available design for the application that leverages multiple
regions for the most recently accessed content and the latency-sensitive portions of the website.
The most latency-sensitive component of the application involves reading user
preferences to support website personalization and ad selection.
In addition to running your application in multiple regions, which option will support this
application's requirements?



1. Serve user content from S3 and CloudFront, and use Route 53 latency-based routing
between ELBs in each region. Retrieve user preferences from a local DynamoDB table in
each region, and leverage SQS to capture changes to user preferences, with SQS workers
propagating updates to each table.
2. Use the S3 Copy API to copy recently accessed content to multiple regions, and serve
user content from S3 and CloudFront with dynamic content and an ELB in each region.
Retrieve user preferences from an ElastiCache cluster in each region, and leverage SNS
notifications to propagate user preference changes to a worker node in each region.
3. Serve user content from S3 and CloudFront with Route 53 latency-based routing
between ELBs in each region. Retrieve user preferences from a DynamoDB table, and
leverage SQS to capture changes to user preferences, with SQS workers propagating
DynamoDB updates.
4. Serve user content from S3 and CloudFront with dynamic content, and an ELB in each
region. Retrieve user preferences from an ElastiCache cluster in each region, and leverage
Simple Workflow (SWF) to manage the propagation of user preferences from a centralized
DB to each ElastiCache cluster.

Answer: 2

Explanation: ElastiCache is a good fit for fast retrieval of regularly accessed data. A typical website contains a mix of static and dynamic content. Static content includes
images and style sheets; dynamic, application-generated content includes the elements of your site that are personalized to each viewer. Previously, developers who wanted to improve
the performance and reliability of their dynamic content had limited options, because the solutions offered by traditional CDNs were expensive, hard to configure, and difficult to
manage. With Amazon CloudFront, there are no additional costs for serving dynamic content beyond CloudFront's existing low prices for data transfer and requests, and no
required long-term commitments. There are also no up-front fees, no monthly platform fees, and no need to hire expensive consultants to help with configuration.
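The propagation pattern in the answer keeps a copy of each user's preferences close to every region and fans changes out asynchronously. A minimal sketch of that fan-out, with plain dictionaries standing in for per-region ElastiCache clusters and a function call standing in for an SNS publish (region names, user IDs, and preference keys are all illustrative):

```python
# Hypothetical sketch: a preference change is published once and a
# subscribed worker in each region applies it to that region's cache.
REGIONS = ["us-east-1", "eu-west-1", "ap-northeast-1"]

regional_caches = {region: {} for region in REGIONS}  # user_id -> prefs

def regional_worker(region, user_id, prefs):
    """Worker node in one region: apply the change to the local cache."""
    regional_caches[region][user_id] = prefs

def publish_preference_change(user_id, prefs):
    """Stand-in for an SNS publish: fan the change out to every region."""
    for region in REGIONS:
        regional_worker(region, user_id, prefs)

publish_preference_change("user-42", {"theme": "dark", "ads": "sports"})

# Every region can now serve the preference read locally, at low latency.
print(regional_caches["ap-northeast-1"]["user-42"]["theme"])  # dark
```

In a real deployment each regional worker would be an SNS subscriber writing into its local cache, so the latency-sensitive read path never leaves the region.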






Question : Your company has HQ in Tokyo and branch offices all over the world, and is using
logistics software with a multi-regional deployment on AWS in Japan, Europe, and the USA.
The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data
persistence. Each region has deployed its own database.
In the HQ region you run an hourly batch process that reads data from every region to
compute cross-regional reports, which are sent by email to all offices. This batch process
must be completed as fast as possible to quickly optimize logistics. How do you build the
database architecture in order to meet the requirements?
1. For each regional deployment, use RDS MySQL with a master in the region and a read
replica in the HQ region
2. For each regional deployment, use MySQL on EC2 with a master in the region and send
hourly EBS snapshots to the HQ region
3. For each regional deployment, use RDS MySQL with a master in the region and send
hourly RDS snapshots to the HQ region
4. For each regional deployment, use MySQL on EC2 with a master in the region and use
S3 to copy data files hourly to the HQ region
5. Use Direct Connect to connect all regional MySQL deployments to the HQ region and
reduce network latency for the batch process

Answer: 1


Explanation: Direct Connect does not address this problem, so option 5 is out. Amazon RDS Read Replicas provide enhanced performance and durability for Database (DB) Instances. This
replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more
replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read
replicas can also be promoted, so that they become standalone DB Instances.
Read replicas are available in Amazon RDS for MySQL, PostgreSQL, and Amazon Aurora. When you create a read replica, you specify an existing DB Instance as the source. Amazon RDS
takes a snapshot of the source instance and creates a read-only instance from the snapshot. For MySQL and PostgreSQL, Amazon RDS uses those engines' native asynchronous replication
to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can
connect to a read replica just as they would to any DB instance. Amazon RDS replicates all databases in the source DB instance.
Amazon Aurora employs an SSD-backed virtualized storage layer purpose-built for database workloads. Amazon Aurora Replicas share the same underlying storage as the source
instance, lowering costs and avoiding the need to copy data to the replica nodes.
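The answer's architecture can be set up with one AWS CLI call per regional master, creating a cross-region read replica in the HQ region. A sketch under assumed identifiers (the account ID, instance names, and source region are placeholders):

```shell
# Create a read replica in the HQ (Tokyo) region from the EU regional master.
# Cross-region replica creation takes the source instance's ARN.
aws rds create-db-instance-read-replica \
    --region ap-northeast-1 \
    --db-instance-identifier logistics-eu-replica-hq \
    --source-db-instance-identifier arn:aws:rds:eu-west-1:123456789012:db:logistics-eu
```

The hourly batch in HQ then reads each replica locally instead of querying remote masters over the WAN.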





Question : You need persistent and durable storage to trace call activity of an IVR (Interactive Voice
Response) system. Call duration is mostly in the 2-3 minute timeframe. Each traced call
can be either active or terminated. An external application needs to know, each minute, the
list of currently active calls, which are usually a few calls/second. But once per month there
is a periodic peak of up to 1,000 calls/second for a few hours. The system is open 24/7 and
any downtime should be avoided. Historical data is periodically archived to files. Cost
saving is a priority for this project.
What database implementation would better fit this scenario, keeping costs as low as
possible?



1. Use RDS Multi-AZ with two tables, one for "Active calls" and one for "Terminated calls".
In this way the "Active calls" table is always small and efficient to access.
2. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "IsActive"
attribute that is present for active calls only. In this way the Global Secondary Index is
sparse and more efficient.
3. Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute
that can be equal to "active" or "terminated". In this way the Global Secondary Index can be
used for all items in the table.
4. Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal
to "ACTIVE" or "TERMINATED". In this way the SQL query is optimized by the use of the
index.


Answer: 2
Explanation: Take Advantage of Sparse Indexes

For any item in a table, DynamoDB will only write a corresponding entry to a global secondary index if the index key value is present in the item. For global secondary indexes, this
is the index partition key and its sort key (if present). If the index key value(s) do not appear in every table item, the index is said to be sparse.

You can use a sparse global secondary index to efficiently locate table items that have an uncommon attribute. To do this, you take advantage of the fact that table items that do not
contain global secondary index attribute(s) are not indexed at all. For example, in the GameScores table, certain players might have earned a particular achievement for a game - such
as "Champ" - but most players have not. Rather than scanning the entire GameScores table for Champs, you could create a global secondary index with a partition key of Champ and a
sort key of UserId. This would make it easy to find all the Champs by querying the index instead of scanning the table.

Such a query can be very efficient, because the number of items in the index will be significantly fewer than the number of items in the table. In addition, the fewer table
attributes you project into the index, the fewer read capacity units you will consume from the index.
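The sparse-index behavior described above can be sketched with plain dictionaries: only items carrying the index key attribute get an entry in the index, so the "find active calls" lookup touches far fewer items than a full table scan. The attribute and call names are illustrative, not DynamoDB API calls:

```python
# A "Calls" table where only active calls carry the IsActive attribute.
calls_table = {
    "call-1": {"duration": 120, "IsActive": "Y"},
    "call-2": {"duration": 95},                   # terminated: no IsActive
    "call-3": {"duration": 30, "IsActive": "Y"},
    "call-4": {"duration": 200},                  # terminated: no IsActive
}

# Build the sparse index: only items where the index key is present,
# mirroring how DynamoDB populates a sparse global secondary index.
is_active_index = {
    call_id: item for call_id, item in calls_table.items() if "IsActive" in item
}

# Finding active calls via the index examines 2 items instead of 4.
active_calls = sorted(is_active_index)
print(active_calls)  # ['call-1', 'call-3']
```

At the monthly 1,000 calls/second peak the same property holds: the index stays proportional to the number of *active* calls, not the total call history.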


Related Questions


Question : Your company policies require encryption of sensitive data at rest. You are considering the possible options for protecting data while storing it at rest on an EBS
data volume, attached to an EC2 instance. Which of these options would allow you to encrypt your data at rest?
(Choose 3 answers)

A. Implement third party volume encryption tools
B. Do nothing as EBS volumes are encrypted by default
C. Encrypt data inside your applications before storing it on EBS
D. Encrypt data using native data encryption drivers at the file system level
E. Implement SSL/TLS for all services running on the server

1. A,B,C
2. B,C,D
3. A,C,D
4. A,E,D
5. B,D,E


Question : You currently operate a web application in the AWS US-East region. The application runs on an auto-scaled layer of EC2 instances and an RDS Multi-AZ database. Your IT
security compliance officer has tasked you to develop a reliable and durable logging solution to track changes made to your EC2, IAM, and RDS resources. The solution must ensure the
integrity and confidentiality of your log data. Which of these solutions would you recommend?
1. Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles S3 bucket policies and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
2. Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs.
3. Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
4. Create three new CloudTrail trails with three new S3 buckets to store the logs one for the AWS Management console, one for AWS SDKs and one for command line tools Use IAM roles and S3 bucket policies on the S3 buckets that store your logs.


Question : To serve Web traffic for a popular product, your chief financial officer and IT director have
purchased 10 m1.large heavy utilization Reserved Instances (RIs), evenly spread across two
Availability Zones. Route 53 is used to deliver the traffic to an Elastic Load Balancer (ELB).
After several months, the product grows even more popular and you need additional
capacity. As a result, your company purchases two c3.2xlarge medium utilization Reserved Instances. You
register the two c3.2xlarge instances with your ELB and quickly find that the m1.large
instances are at 100% of capacity while the c3.2xlarge instances have significant unused
capacity. Which option is the most cost-effective and uses EC2 capacity most
effectively?


1. Use a separate ELB for each instance type and distribute load to the ELBs with Route 53
weighted round robin.
2. Configure an Auto Scaling group and Launch Configuration with the ELB to add up to 10 more
on-demand m1.large instances when triggered by CloudWatch. Shut off the c3.2xlarge instances.
3. Route traffic to the EC2 m1.large and c3.2xlarge instances directly using Route 53 latency-
based routing and health checks. Shut off the ELB.
4. Configure the ELB with the two c3.2xlarge instances and use an on-demand Auto Scaling group for
up to two additional c3.2xlarge instances. Shut off the m1.large instances.



Question : You are designing the network infrastructure for an application server in Amazon VPC. Users will access all the application instances from the Internet as well as from
an on-premises network. The on-premises network is connected to your VPC over an AWS Direct Connect link.
How would you design routing to meet the above requirements?
1. Configure a single routing table with a default route via the Internet gateway. Propagate a default route via BGP on the AWS Direct Connect customer router. Associate
the routing table with all VPC subnets.
2. Configure a single routing table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct
Connect customer router. Associate the routing table with all VPC subnets.
3. Configure a single routing table with two default routes: one to the Internet via an Internet gateway, the other to the on-premises network via the VPN gateway. Use
this routing table across all subnets in your VPC.
4. Configure two routing tables, one that has a default route via the Internet gateway and another that has a default route via the VPN gateway. Associate both routing
tables with each VPC subnet.



Question : A customer has established an AWS Direct Connect connection to AWS. The link is up and routes are being advertised from the customer's end, however the customer is
unable to connect from EC2 instances inside its VPC to servers residing in its datacenter. Which of the following options provide a viable solution to remedy this situation?

(Choose 2 answers)
A. Add a route to the route table with an IPsec VPN connection as the target.
B. Enable route propagation to the virtual private gateway (VGW).
C. Enable route propagation to the customer gateway (CGW).
D. Modify the route table of all instances using the 'route' command.
E. Modify the instances' VPC subnet route table by adding a route back to the customer's on-premises environment.
1. A,B
2. B,C
3. [option text not available]
4. A,E
5. B,E




Question : You are designing an SSL/TLS solution that requires HTTPS clients to be authenticated by the Web server using client certificate authentication. The solution must be
resilient. Which of the following options would you consider for configuring the web server infrastructure? (Choose 2 answers)

A. Configure ELB with TCP listeners on TCP/443, and place the Web servers behind it.
B. Configure your Web servers with EIPs. Place the Web servers in a Route 53 Record Set and configure health checks against all Web servers.
C. Configure ELB with HTTPS listeners, and place the Web servers behind it.
D. Configure your web servers as the origins for a CloudFront distribution. Use custom SSL certificates on your CloudFront distribution.


1. A,B
2. B,C
3. [option text not available]
4. B,D
5. A,B