
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server currently shares
read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The
database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system
directory is backed up weekly to off-site tapes.

Which AWS storage and database architecture meets the requirements of the application?

1. Web servers store read-only data in S3 and copy it from S3 to the root volume at boot time. App servers share state using a combination of DynamoDB and IP unicast.
Database uses RDS with Multi-AZ deployment and one or more Read Replicas. Web and app servers are backed up weekly via AMIs; the database is backed up via DB snapshots.
2. Web servers store read-only data in S3 and copy it from S3 to the root volume at boot time. App servers share state using a combination of DynamoDB and IP unicast.
Database uses RDS with Multi-AZ deployment and one or more Read Replicas. Web servers, app servers, and the database are backed up weekly to Glacier using snapshots.
3. Web servers store read-only data in S3 and copy it from S3 to the root volume at boot time. App servers share state using a combination of DynamoDB and IP unicast.
Database uses RDS with Multi-AZ deployment. Web and app servers are backed up weekly via AMIs; the database is backed up via DB snapshots.
4. Web servers store read-only data on an EC2 NFS server and mount it on each web server at boot time. App servers share state using a combination of DynamoDB and IP
multicast. Database uses RDS with Multi-AZ deployment and one or more Read Replicas. Web and app servers are backed up weekly via AMIs; the database is backed up via DB snapshots.


Answer: 1

Explanation: Static read-only content can be stored in S3 (hence option 4, which uses an EC2 NFS server, is out).
The database uses read slaves, which can be replaced by Read Replicas in RDS (hence option 3, which does not mention Read Replicas, is out).
There are two ways to back up an RDS database:
Automated Backup

Automated backup is an Amazon RDS feature that automatically creates a backup of your database. Automated backups are enabled by default for a new DB instance.

An automated backup occurs during a daily user-configurable period of time known as the preferred backup window. Backups created during the backup window are retained for a
user-configurable number of days (the backup retention period). Note that if the backup requires more time than allotted to the backup window, the backup will continue to completion.
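For illustration, a minimal boto3 sketch of setting the retention period and preferred backup window on an existing instance (the DB instance identifier is a placeholder):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Keep automated backups for 7 days and run them in a low-traffic window.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb-prod",        # placeholder identifier
    BackupRetentionPeriod=7,
    PreferredBackupWindow="03:00-04:00",     # UTC
    ApplyImmediately=True,
)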

DB Snapshots

DB snapshots are user-initiated and enable you to back up your DB instance in a known state as frequently as you wish, and then restore to that specific state at any time. DB
snapshots can be created with the Amazon RDS console or the CreateDBSnapshot action in the Amazon RDS API. DB snapshots are kept until you explicitly delete them with the Amazon RDS
console or the DeleteDBSnapshot action in the Amazon RDS API.
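The same CreateDBSnapshot and DeleteDBSnapshot actions, sketched with boto3 (instance and snapshot identifiers are placeholders):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# User-initiated snapshot; retained until explicitly deleted.
rds.create_db_snapshot(
    DBInstanceIdentifier="mydb-prod",           # placeholder instance
    DBSnapshotIdentifier="mydb-prod-weekly",    # placeholder snapshot name
)

# Wait until the snapshot is available, then remove an older one if desired.
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-prod-weekly")
rds.delete_db_snapshot(DBSnapshotIdentifier="mydb-prod-old")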

Hence option 1 is correct.







Question : Your customer wishes to deploy an enterprise application to AWS that will consist of several web servers, several application servers, and a small Oracle
database. Information is stored both in the database and in the file systems of the various servers. The backup system must support database recovery, whole server and whole disk
restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database. Which backup architecture will meet these
requirements?
1. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs, and supplement with file-level backups to S3 using traditional enterprise backup
software to provide file-level restore.
2. Backup RDS using a Multi-AZ deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file-level restore.
3. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots, and supplement with file-level backups to Amazon Glacier using
traditional enterprise backup software to provide file-level restore.
4. Backup the RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.

Answer: 1

Explanation: Since recovery must be completed in less than 2 hours, Glacier is not a good solution (option 3 is out). It is always a good idea to create backups of your working instance
configurations by bundling a custom AMI after any modifications. Oracle Backup and Recovery: Automated Backups - Turned on by default, the automated backup feature of Amazon RDS
enables point-in-time recovery for your DB Instance. Amazon RDS will back up your database and transaction logs and store both for a user-specified retention period. This allows you
to restore your DB Instance to any second during your retention period, up to the last five minutes. Your automatic backup retention period can be configured to up to thirty-five
days. DB Snapshots - DB Snapshots are user-initiated backups of your DB Instance. These full database backups will be stored by Amazon RDS until you explicitly delete them. You can
create a new DB Instance from a DB Snapshot whenever you desire.
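A minimal boto3 sketch of the two recovery mechanisms option 1 relies on: a point-in-time restore from the automated RDS backups, and bundling an AMI of an EC2 server for whole-server restore (all identifiers and the timestamp are placeholders):

import boto3
from datetime import datetime, timezone

rds = boto3.client("rds", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# Restore the Oracle DB instance to a specific second within the retention period.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="oracle-prod",           # placeholder
    TargetDBInstanceIdentifier="oracle-prod-restored",  # placeholder
    RestoreTime=datetime(2024, 1, 15, 9, 30, tzinfo=timezone.utc),
)

# Whole-server recovery point for a web/app server: bundle an AMI.
ec2.create_image(
    InstanceId="i-0123456789abcdef0",    # placeholder instance id
    Name="appserver-weekly-backup",
    NoReboot=True,
)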






Question : You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of
around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database,
and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a
PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan
requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-
year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?

1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
2. Ingest data into a DynamoDB table and move old data to a Redshift cluster
3. Replace the RDS instance with a 6-node Redshift cluster with 96TB of storage
4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS

Answer: 2
Explanation: The pilot stores 3GB per month from 100 sensors, so 100K sensors would produce roughly 3,000GB (3TB) per month. For that volume of data RDS is not a good option, and RDS
is best suited to structured relational data. Amazon DynamoDB is ideal for existing or new applications that need a flexible NoSQL database with low read and write
latencies, and the ability to scale storage and throughput up or down as needed without code changes or downtime.
Common use cases include: mobile apps, gaming, digital ad serving, live voting and audience interaction for live events, sensor networks, log ingestion, access control for web-based
content, metadata storage for Amazon S3 objects, ecommerce shopping carts, and web session management. Many of these use cases require a highly available and scalable database
because downtime or performance degradation has an immediate negative impact on an organization's business. Amazon DynamoDB uses SSD drives and is optimized for workloads with a
high I/O rate per GB stored. If you plan to store very large amounts of data that are infrequently accessed, other storage options, such as Amazon S3, may be a better choice.
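To illustrate the ingestion pattern behind the answer, a minimal boto3 sketch of writing one sensor reading to DynamoDB; the table name and key schema (sensor_id partition key, ts sort key) are assumptions, not part of the question:

import time
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("SensorReadings")   # hypothetical table name

# One ~1KB reading per sensor per minute; partition by sensor, sort by timestamp.
table.put_item(
    Item={
        "sensor_id": "sensor-000123",   # partition key (assumed schema)
        "ts": int(time.time()),         # sort key, epoch seconds (assumed schema)
        "noise_db": 63,
        "pm25": 12,
    }
)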

Amazon Redshift is ideal for analyzing large datasets using your existing business intelligence tools. Organizations are
using Amazon Redshift to do the following:
- Analyze global sales data for multiple products
- Store historical stock trade data
- Analyze ad impressions and clicks
- Aggregate gaming data
- Analyze social trends
- Measure clinical quality, operation efficiency, and financial performance in the health care space
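If older readings are periodically exported to S3, a hedged sketch of loading them into Redshift for year-over-year analysis via the Redshift Data API; the cluster, database, table, bucket, and IAM role names are all placeholders:

import boto3

rsd = boto3.client("redshift-data", region_name="us-east-1")

# COPY archived sensor data from S3 into a Redshift table for analysis.
rsd.execute_statement(
    ClusterIdentifier="sensor-analytics",   # placeholder cluster
    Database="analytics",
    DbUser="admin",
    Sql=(
        "COPY sensor_readings "
        "FROM 's3://sensor-archive/2023/' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
        "FORMAT AS JSON 'auto';"
    ),
)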




Related Questions


Question : A user has launched an EBS-backed EC2 instance in the US-East-1a region. The user stopped the instance and started it back a few days later.
AWS throws an 'InsufficientInstanceCapacity' error. What can be the possible reason for this?

1. AWS does not have sufficient capacity in that availability zone
2. AWS zone mapping is changed for that user account
3. There is some issue with the host capacity on which the instance is launched
4. The user account has reached the maximum EC2 instance limit
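For completeness, a minimal boto3 sketch of detecting this error when starting the instance; the handling strategy (retry later, or relaunch from an AMI in another Availability Zone) is a suggestion, not part of the question:

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")

def start_with_capacity_check(instance_id):
    # Start a stopped instance; return False on a capacity error so the
    # caller can retry later or relaunch from an AMI in another AZ.
    try:
        ec2.start_instances(InstanceIds=[instance_id])
    except ClientError as err:
        if err.response["Error"]["Code"] == "InsufficientInstanceCapacity":
            return False
        raise
    return True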



Question : An organization has created multiple IAM users. The organization wants each of the IAM users to have access to a separate DynamoDB table. All the
users are added to the same group, and the organization wants to set up a group-level policy for this. How can the organization achieve this?

1. Define the group policy and add a condition which allows the access based on the IAM name
2. Create a DynamoDB table with the same name as the IAM user name and define the policy rule which grants access based on the DynamoDB ARN using a variable
3. Create a separate DynamoDB database for each user and configure a policy in the group based on the DB variable
4. It is not possible to have a group level policy which allows different IAM users to different DynamoDB Tables
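Option 2 relies on the IAM policy variable ${aws:username}. A minimal boto3 sketch of such a group policy; the account ID, group name, and policy name are placeholders:

import json
import boto3

iam = boto3.client("iam")

# Grant each IAM user access only to the DynamoDB table named after them.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "dynamodb:*",
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/${aws:username}",
    }],
}

iam.put_group_policy(
    GroupName="dynamodb-users",            # placeholder group
    PolicyName="per-user-table-access",    # placeholder policy name
    PolicyDocument=json.dumps(policy),
)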





Question : QuickTechie.com currently runs several FTP servers that its customers use to upload and download large video files. They wish to move this system to AWS
to make it more scalable, but they wish to maintain customer privacy and keep costs to a minimum. What AWS architecture would you recommend?
1. Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM Users in a Group
that has an IAM policy that permits access to sub-directories within the bucket via use of the 'username' Policy variable.
2. Create a single S3 bucket with Reduced Redundancy Storage turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for
each customer with a Bucket Policy that permits access only to that one customer.
3. Create an Auto Scaling group of FTP servers with a scaling policy to automatically scale in when minimum network traffic on the Auto Scaling group falls below a given
threshold. Load a central list of FTP users from S3 as part of the user data startup script on each instance.
4. Create a single S3 bucket with Requester Pays turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer
with a Bucket Policy that permits access only to that one customer.
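Option 1 applies the same ${aws:username} policy-variable idea to prefixes in a single shared bucket. A sketch of what that policy document could look like, written as a Python dict (the bucket name is a placeholder); it would be attached to the customer group with iam.put_group_policy() as in the earlier sketch:

# Each user may list and access only the "folder" matching their IAM user name.
per_user_s3_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::customer-uploads",   # placeholder bucket
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::customer-uploads/${aws:username}/*",
        },
    ],
}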



Question : QuickTechie.com has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery
capability in a separate region, with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be
able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for
the synchronization of data, and synchronize only the modified elements.
Which design would you choose to meet these requirements?
1. Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day. Create a 'Lastupdated' attribute in your DynamoDB table that would represent the
timestamp of the last update and use it as a filter.
2. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region.
3. Access Mostly Uused Products by 50000+ Subscribers
will import data from S3 to DynamoDB in the other region.
4. Send each update also into an SQS queue in the second region; use an auto-scaling group behind the SQS queue to replay the write in the second region.

Correct Answer : 1 Exp : Looking at the question, the RTO is 2 hours (you must be able to recover within 2 hours) and the RPO is 24 hours (it is acceptable to lose data
uploaded in the last 24 hours). Only modified data needs to be synchronized, hence we need to be able to copy incremental data. Amazon's announcement is as below:
We are excited to announce the availability of "DynamoDB Cross-Region Copy" feature in AWS Data Pipeline service. DynamoDB Cross-Region Copy enables you to configure periodic copy of
DynamoDB table data from one AWS region to a DynamoDB table in another region (or to a different table in the same region). Using this feature can enable you to deliver applications
from other AWS regions using the same data, as well as enabling you to create a copy of your data in another region for disaster recovery purposes.

To get started with this feature, from the AWS Data Pipeline console choose the "Cross Region DynamoDB Copy" Data Pipeline template and select the source and destination DynamoDB
tables you want to copy from and to. You can also choose whether you want to perform an incremental or full copy of a table. Specify the time you want the first copy to start and the
frequency of the copy, and your scheduled copy will be ready to go. Hence options 2, 3, and 4 are out; option 1 is correct.
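Option 1 also depends on the 'Lastupdated' attribute being usable as a filter. A minimal boto3 sketch of reading only the items modified since the previous copy; the table name, the attribute type (epoch seconds), and the cutoff value are assumptions:

import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Orders")   # placeholder table name

# Items changed since the last daily copy (pagination via LastEvaluatedKey omitted).
last_copy_ts = 1700000000          # placeholder cutoff, epoch seconds
resp = table.scan(FilterExpression=Attr("Lastupdated").gt(last_copy_ts))
changed_items = resp["Items"]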



Question : QuickTechie.com is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally,
often on the move, and uses company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if it
is required, you may need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?
1. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a
few days. CloudFront to serve HLS transcoded videos from S3
2. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to
host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier
3. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally backup original files
after a few days. CloudFront to serve HLS transcoded videos from EC2.
4. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS
volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2
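The 'archive original files to Glacier after a few days' piece of option 1 maps to an S3 lifecycle rule. A minimal boto3 sketch, with the bucket name, prefix, and day count as placeholders:

import boto3

s3 = boto3.client("s3")

# Move original high-resolution uploads to Glacier a few days after upload.
s3.put_bucket_lifecycle_configuration(
    Bucket="training-videos",                 # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-originals",
            "Filter": {"Prefix": "originals/"},   # placeholder prefix
            "Status": "Enabled",
            "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
        }]
    },
)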


Question : QuickTechie Inc. wants to implement an order fulfillment process for selling a personalized gadget that needs an average of a few days to produce, with some orders taking
up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months, and 10,000 orders per day after 12 months. Orders coming in are checked for
consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment, and payment processing. If the product does not meet the quality
standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their
orders, such as payment failure.

Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders.
How can you implement the order fulfillment process while making sure that the emails are delivered reliably?

1. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic
Beanstalk instances to send emails to customers.
2. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails
to customers.
3. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers.
4. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 Instances that poll the tasks and execute them. Use SES to send emails to customers.
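Whichever workflow option is chosen, the notification piece can be sent through SES. A minimal boto3 sketch; the addresses are placeholders and the sender must be a verified SES identity:

import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Order-status notification email.
ses.send_email(
    Source="orders@example.com",                           # verified sender (placeholder)
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order: payment failed"},
        "Body": {"Text": {"Data": "Please update your payment details to continue."}},
    },
)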


Question : An AWS customer runs a public blogging website. The site users upload two million blog entries a month. The average blog entry size is KB. The access rate to blog
entries drops to negligible levels 6 months after publication, and users rarely access a blog entry 1 year after publication. Additionally, blog entries have a high update rate during
the first 3 months following publication; this drops to no updates after 6 months. The customer wants to use CloudFront to improve its users' load times. Which of the following
recommendations would you make to the customer?
1. Duplicate entries into two different buckets and create two separate CloudFront distributions where S3 access is restricted only to the CloudFront identity
2. Create a CloudFront distribution with US/Europe price class for US/Europe users and a different CloudFront distribution with All Edge Locations for the remaining
users.
3. Create a CloudFront distribution with S3 access restricted only to the CloudFront identity and partition the blog entry's location in S3 according to the month it
was uploaded to be used with CloudFront behaviors.
4. Create a CloudFront distribution with Restrict Viewer Access and Forward Query String set to true, and a minimum TTL of 0.
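Option 3 hinges on partitioning blog entries in S3 by upload month so that CloudFront cache behaviors (path patterns such as /2024/01/*) can apply different TTLs to recent, frequently updated entries and to old, effectively static ones. A small sketch of such a key layout; the naming scheme is an illustration, not prescribed by the question:

from datetime import datetime, timezone

def blog_entry_key(entry_id: str, published: datetime) -> str:
    # Build an S3 key partitioned by publication month, e.g. 2024/01/my-post.html,
    # so CloudFront behaviors can match on the year/month path prefix.
    return f"{published.year:04d}/{published.month:02d}/{entry_id}.html"

print(blog_entry_key("my-first-post", datetime(2024, 1, 15, tzinfo=timezone.utc)))
# -> 2024/01/my-first-post.html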