
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : You are the new IT architect in a company that operates a mobile sleep tracking application.
When activated at night, the mobile app sends collected data points of 1 kilobyte every 5 minutes to your backend.
The backend takes care of authenticating the user and writing the data points into an Amazon DynamoDB table.
Every morning, you scan the table to extract and aggregate last night's data on a per-user basis, and store the results in Amazon S3.
Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app. Currently you have around 100k users,
who are mostly based out of North America. You have been tasked to optimize the architecture of the backend system to lower cost.

What would you recommend? (Choose 2 answers)

A. Create a new Amazon DynamoDB Table each day and drop the one for the previous day after its data is on Amazon S3.
B. Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3.
C. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
D. Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
E. Write data directly into an Amazon Redshift cluster replacing both Amazon DynamoDB and Amazon S3.


1. A,B
2. B,C
3. Access Mostly Uused Products by 50000+ Subscribers
4. A,C


Answer: 4
Explanation: Once the previous night's data has been aggregated and stored in Amazon S3, there is no reason to keep it in DynamoDB; creating a new table each day and dropping the old one (option A) reduces storage cost.
Lower provisioned write throughput also lowers cost, so buffering the incoming data points in an Amazon SQS queue (option C) lets a worker write them to DynamoDB at a steady rate with less provisioned write capacity.

The morning job then aggregates the data from DynamoDB as before.
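As a rough illustration of the SQS buffering idea (the queue URL and the per-day table name below are hypothetical, not taken from the question), a worker could drain the queue and batch-write into DynamoDB at a steady rate:

    import json
    import boto3

    sqs = boto3.client("sqs")
    dynamodb = boto3.resource("dynamodb")

    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/sleep-data-points"  # hypothetical queue
    table = dynamodb.Table("sleep-data-2015-06-01")  # hypothetical per-day table (option A)

    def drain_queue():
        # Pull buffered data points from SQS and write them to DynamoDB in batches,
        # so the table only needs modest provisioned write throughput (option C).
        while True:
            resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
            messages = resp.get("Messages", [])
            if not messages:
                break
            with table.batch_writer() as batch:
                for msg in messages:
                    batch.put_item(Item=json.loads(msg["Body"]))
            sqs.delete_message_batch(
                QueueUrl=queue_url,
                Entries=[{"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]} for m in messages],
            )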






Question : A benefits enrollment company is hosting a 3-tier web application running in a VPC on
AWS which includes a NAT (Network Address Translation) instance in the public Web tier.
There is enough provisioned capacity for the expected workload for the new fiscal year
benefit enrollment period, plus some extra overhead. Enrollment proceeds nicely for two
days and then the web tier becomes unresponsive. Upon investigation using CloudWatch
and other monitoring tools, it is discovered that there is an extremely large and
unanticipated amount of inbound traffic coming from a set of 15 specific IP addresses over
port 80 from a country where the benefits company has no customers. The web tier
instances are so overloaded that benefit enrollment administrators cannot even SSH into
them. Which activity would be useful in defending against this attack?
1. Create a custom route table associated with the web tier and block the attacking IP
addresses from the IGW (Internet Gateway)
2. Change the EIP (Elastic IP Address) of the NAT instance in the web tier subnet and
update the Main Route Table with the new EIP
3. Access Mostly Uused Products by 50000+ Subscribers

4. Create an inbound NACL (Network Access Control List) associated with the web tier
subnet with deny rules to block the attacking IP addresses


Answer: 4

Explanation: A network access control list (ACL) is an optional layer of security that acts as a firewall for controlling traffic in and out of a subnet. You might set up network ACLs with
rules similar to your security groups in order to add an additional layer of security to your VPC. You can configure deny rules for specific IPs or ranges of IPs in a network ACL.
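For illustration only, such a deny rule could be added with boto3; the network ACL ID, rule number, and CIDR below are made-up placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Deny inbound TCP port 80 from one attacking address. NACL rules are evaluated
    # in ascending rule-number order, so the deny must use a lower number than the allow rules.
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0",   # hypothetical web-tier NACL
        RuleNumber=90,
        Protocol="6",                           # TCP
        RuleAction="deny",
        Egress=False,                           # inbound rule
        CidrBlock="203.0.113.10/32",            # one attacking IP (example); repeat per address
        PortRange={"From": 80, "To": 80},
    )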




Question : You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-Optimized and supports dedicated throughput between
EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16 KB reads or writes) for a total
of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS random read and write performance. Sometime later, in order to increase the total
random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for
a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level
does not increase at all. What is the problem and a valid solution?
1. Larger storage volumes support higher Provisioned IOPS rates; increase the provisioned volume storage of each of the 6 EBS volumes to 1 TB.
2. The EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS-Optimized instance that provides larger throughput.
3. Access Mostly Uused Products by 50000+ Subscribers
4. RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes, but increase each Provisioned IOPS EBS volume to 6,000 IOPS.
5. The standard EBS instance root volume limits the total IOPS rate; change the instance root volume to also be a 500 GB 4,000 Provisioned IOPS volume.


Answer: 1

Explanation: Larger EC2 instance types provide higher I/O performance: more dedicated throughput to Amazon EBS, a higher maximum number of IOPS the instance can support at a
16 KB I/O size, and a higher approximate maximum bandwidth on that connection in MB/s. Choose an EBS-optimized instance that provides more dedicated EBS throughput than
your application needs; otherwise, the connection between Amazon EBS and Amazon EC2 can become a performance bottleneck.
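A back-of-the-envelope check of that bottleneck might look like the sketch below; the dedicated-throughput figure is an assumed example, not a value given in the question:

    # Does the instance's dedicated EBS throughput cover the bandwidth implied by
    # the provisioned IOPS at a 16 KB I/O size?
    io_size_kb = 16
    provisioned_iops = 24000                     # six volumes x 4,000 IOPS
    required_mb_per_s = provisioned_iops * io_size_kb / 1024   # 375 MB/s

    assumed_dedicated_mbps = 1000                # assumed EBS-Optimized throughput in Mbps
    available_mb_per_s = assumed_dedicated_mbps / 8            # 125 MB/s

    print(f"required {required_mb_per_s:.0f} MB/s vs available {available_mb_per_s:.0f} MB/s")
    # If required exceeds available, the EC2-to-EBS connection, not the volumes,
    # caps the random IOPS the instance can actually achieve.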

Note that some instance types are EBS-optimized by default. For instances that are EBS-optimized by default, there is no need to enable EBS optimization and there is no effect if you
disable EBS optimization using the CLI or API. You can enable EBS optimization for the other instance types that support EBS optimization when you launch the instances, or enable EBS
optimization after the instances are running.
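As a minimal sketch of enabling EBS optimization on an existing instance with boto3 (the instance ID is a placeholder, and the instance generally has to be stopped before this attribute can be changed):

    import boto3

    ec2 = boto3.client("ec2")

    # Enable EBS optimization on an instance type that supports it but does not
    # have it on by default; perform this while the instance is stopped.
    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",        # hypothetical instance ID
        EbsOptimized={"Value": True},
    )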




Related Questions


Question : Your company hosts a social media site supporting users in multiple countries. You have
been asked to provide a highly available design for the application that leverages multiple
regions for the most recently accessed content and latency-sensitive portions of the website.
The most latency-sensitive component of the application involves reading user
preferences to support website personalization and ad selection.
In addition to running your application in multiple regions, which option will support this
application's requirements?



1. Serve user content from S3, CloudFront, and use Route53 latency-based routing
between ELBs in each region. Retrieve user preferences from a local DynamoDB table in
each region and leverage SQS to capture changes to user preferences, with SQS workers
for propagating updates to each table.
2. Use the S3 Copy API to copy recently accessed content to multiple regions and serve
user content from S3, CloudFront with dynamic content, and an ELB in each region.
Retrieve user preferences from an ElastiCache cluster in each region and leverage SNS
notifications to propagate user preference changes to a worker node in each region.
3. Access Mostly Uused Products by 50000+ Subscribers
user content from S3, CloudFront, and Route53 latency-based routing between ELBs in
each region. Retrieve user preferences from a DynamoDB table and leverage SQS to
capture changes to user preferences, with SQS workers for propagating DynamoDB
updates.
4. Serve user content from S3, CloudFront with dynamic content, and an ELB in each
region. Retrieve user preferences from an ElastiCache cluster in each region and leverage
Simple Workflow (SWF) to manage the propagation of user preferences from a centralized
DB to each ElastiCache cluster.


Question : Your company has HQ in Tokyo and branch offices all over the world and is using
logistics software with a multi-regional deployment on AWS in Japan, Europe and the USA.
The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data
persistence. Each region has deployed its own database.
In the HQ region you run an hourly batch process reading data from every region to
compute cross-regional reports that are sent by email to all offices. This batch process must
be completed as fast as possible to quickly optimize logistics. How do you build the
database architecture in order to meet the requirements?
1. For each regional deployment, use RDS MySQL with a master in the region and a read
replica in the HQ region
2. For each regional deployment, use MySQL on EC2 with a master in the region and send
hourly EBS snapshots to the HQ region
3. Access Mostly Uused Products by 50000+ Subscribers
hourly RDS snapshots to the HQ region
4. For each regional deployment, use MySQL on EC2 with a master in the region and use
S3 to copy data files hourly to the HQ region
5. Use Direct Connect to connect all regional MySQL deployments to the HQ region and
reduce network latency for the batch process


Question : You need persistent and durable storage to trace call activity of an IVR (Interactive Voice
Response) system. Call duration is mostly in the 2-3 minute timeframe. Each traced call
can be either active or terminated. An external application needs to know, each minute, the
list of currently active calls, which are usually a few calls/second. But once per month there
is a periodic peak of up to 1,000 calls/second for a few hours. The system is open 24/7 and
any downtime should be avoided. Historical data is periodically archived to files. Cost
saving is a priority for this project.
What database implementation would better fit this scenario, keeping costs as low as
possible?



1. Use RDS Multi-AZ with two tables, one for "Active calls" and one for "Terminated calls".
In this way the "Active calls" table is always small and effective to access.
2. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "IsActive"
attribute that is present for active calls only. In this way the Global Secondary Index is
sparse and more effective.
3. Access Mostly Uused Products by 50000+ Subscribers
that can be equal to "active" or "terminated". In this way the Global Secondary Index can be
used for all items in the table.
4. Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal
to "ACTIVE" or "TERMINATED". In this way the SQL query is optimized by the use of the
index.



Question : You are designing a connectivity solution between on-premises infrastructure and Amazon
VPC. Your servers on-premises will be communicating with your VPC instances. You will
be establishing IPsec tunnels over the Internet. You will be using VPN gateways and
terminating the IPsec tunnels on AWS-supported customer gateways.
Which of the following objectives would you achieve by implementing an IPSec tunnel as
outlined above? (Choose 4 answers)

A. End-to-end protection of data in transit
B. End-to-end Identity authentication
C. Data encryption across the Internet
D. Protection of data in transit over the Internet
E. Peer identity authentication between VPN gateway and customer gateway
F. Data integrity protection across the Internet


1. A,B,C,D
2. B,C,D,E
3. Access Mostly Uused Products by 50000+ Subscribers
4. D,E,F,A
5. E,F,A,B



Question : You are designing an intrusion detection/prevention (IDS/IPS) solution for a customer web
application in a single VPC. You are considering the options for implementing IDS/IPS
protection for traffic coming from the Internet.
Which of the following options would you consider? (Choose 2 answers)

A. Implement IDS/IPS agents on each instance running in the VPC
B. Configure an instance in each subnet to switch its network interface card to promiscuous mode and analyze network traffic.
C. Implement Elastic Load Balancing with SSL listeners in front of the web applications
D. Implement a reverse proxy layer in front of web servers and configure IDS/IPS agents on each reverse proxy server.



1. A,B
2. B,C
3. Access Mostly Uused Products by 50000+ Subscribers
4. A,D




Question : A customer has an AWS Direct Connect connection to an AWS region where they
have a web application hosted on Amazon Elastic Compute Cloud (EC2). The application
has dependencies on an on-premises mainframe database that uses a BASE (Basically
Available, Soft state, Eventual consistency) rather than an ACID (Atomicity, Consistency,
Isolation, Durability) consistency model. The application is exhibiting undesirable behavior
because the database is not able to handle the volume of writes. How can you reduce the
load on your on-premises database resources in the most cost-effective way?

1. Use Amazon Elastic MapReduce (EMR) S3DistCp as a synchronization mechanism
between the on-premises database and a Hadoop cluster on AWS.
2. Modify the application to write to an Amazon SQS queue and develop a worker process
to flush the queue to the on-premises database.
3. Access Mostly Uused Products by 50000+ Subscribers
function to write to the on-premises database.
4. Provision an RDS read-replica database on AWS to handle the writes and synchronize
the two databases using Data Pipeline.