
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : QuickTechie.com has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery
capability in a separate region, with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be
able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for
the synchronization of data, and synchronize only the modified elements.
Which design would you choose to meet these requirements?
1. Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day. Create a "LastUpdated" attribute in your DynamoDB table that represents the
timestamp of the last update and use it as a filter.
2. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region.
3. Use AWS Data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day, then schedule another task immediately after it that
will import data from S3 to DynamoDB in the other region.
4. Also send each update into an SQS queue in the second region; use an Auto Scaling group behind the SQS queue to replay the write in the second region.

Correct Answer : 1
Explanation: Looking at the question, the RTO is 2 hours (you must be able to recover within 2 hours) and the RPO is 24 hours (it is acceptable to lose data uploaded in the last
24 hours). Only the modified data needs to be synchronized, hence we need to be able to copy incremental data. The Amazon announcement reads as below:
We are excited to announce the availability of "DynamoDB Cross-Region Copy" feature in AWS Data Pipeline service. DynamoDB Cross-Region Copy enables you to configure periodic copy of
DynamoDB table data from one AWS region to a DynamoDB table in another region (or to a different table in the same region). Using this feature can enable you to deliver applications
from other AWS regions using the same data, as well as enabling you to create a copy of your data in another region for disaster recovery purposes.

To get started with this feature, from the AWS Data Pipeline console choose the "Cross Region DynamoDB Copy" Data Pipeline template and select the source and destination DynamoDB
tables you want to copy from and to. You can also choose whether you want to perform an incremental or full copy of a table. Specify the time you want the first copy to start and the
frequency of the copy, and your scheduled copy will be ready to go. Hence options 2, 3 and 4 are out, and option 1 is correct.
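For illustration only, here is a minimal boto3 sketch (the table name, region and numeric "LastUpdated" attribute are assumptions, not part of the original question) of how the "synchronize only the modified elements" idea maps to a filtered scan. The exam answer itself relies on the managed Data Pipeline template rather than custom code.

import time
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")  # source region (assumption)

ONE_DAY = 24 * 60 * 60
since = int(time.time()) - ONE_DAY  # only items modified within the 24-hour RPO window

paginator = dynamodb.get_paginator("scan")
pages = paginator.paginate(
    TableName="Orders",                              # hypothetical table name
    FilterExpression="LastUpdated > :since",         # the "LastUpdated" attribute from option 1
    ExpressionAttributeValues={":since": {"N": str(since)}},
)

for page in pages:
    for item in page["Items"]:
        # In a real pipeline these items would be written to the DR-region table,
        # throttled to the read/write throughput reserved for synchronization.
        print(item)

Note that a FilterExpression is applied after items are read, so it reduces the data transferred but not the read capacity consumed.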



Question : QuickTechie.com is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally,
often on the move, and uses company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if
required, you may need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?
1. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a
few days. CloudFront to serve HLS transcoded videos from S3
2. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to
host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier
3. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally backup original files
after a few days. CloudFront to serve HLS transcoded videos from EC2.
4. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS
volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2

Correct Answer : 1

Explanation: Seamless Delivery
Using Amazon Elastic Transcoder, Amazon S3 and Amazon CloudFront, you can store, transcode and deliver your content. By setting the S3 permissions for your CloudFront distribution in
Amazon Elastic Transcoder, it is now a simple one step process to transcode content with Amazon Elastic Transcoder and deliver the multiple output videos via progressive download or
HLS streaming with CloudFront.

Amazon Elastic Transcoder is media transcoding in the cloud. It is designed to be a highly scalable, easy to use and a cost effective way for developers and businesses to convert (or
"transcode") media files from their source format into versions that will playback on devices like smartphones, tablets and PCs.

You can now easily generate protected HLS streams with Amazon Elastic Transcoder and deliver them with Amazon CloudFront. With content protection for HLS, Elastic Transcoder uses
encryption keys supplied by you, or generates keys on your behalf. Both methods use the AWS Key Management Service to protect the security of your keys.

There are no additional Amazon Elastic Transcoder or Amazon CloudFront charges for using these new encryption options. Standard AWS Key Management Service charges apply.

Elastically Scalable
Amazon Elastic Transcoder is designed to scale seamlessly with your media transcoding workload. Amazon Elastic Transcoder is architected to handle large volumes of media files and
large file sizes. Transcoding pipelines enable you to perform multiple transcodes in parallel. Amazon Elastic Transcoder leverages other Amazon Web Services like Amazon S3, Amazon
EC2, Amazon DynamoDB, Amazon Simple Workflow (SWF) and Amazon Simple Notification Service (SNS) to give scalability and reliability.

AWS Integration
Amazon Elastic Transcoder provides an important media building block for creating end-to-end media solutions on AWS. For example, you can use Amazon Glacier to store master content,
Amazon Elastic Transcoder to transcode masters to renditions for distribution stored in Amazon S3, and stream these renditions at scale over the Internet using Amazon CloudFront.
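As a rough sketch (not part of the original explanation) of how option 1's transcoding step could be driven programmatically, the boto3 call below submits an HLS job to an existing Elastic Transcoder pipeline. The pipeline ID, object keys and preset ID are placeholders/assumptions.

import boto3

transcoder = boto3.client("elastictranscoder", region_name="us-east-1")  # region is an assumption

# Pipeline ID, keys and preset ID below are placeholders, not real values.
response = transcoder.create_job(
    PipelineId="1111111111111-abcde1",        # existing pipeline: S3 input bucket -> S3 output bucket
    Input={"Key": "masters/training-2015-01.mp4"},
    OutputKeyPrefix="hls/training-2015-01/",
    Outputs=[{
        "Key": "2M/",
        "PresetId": "1351620000001-200010",   # assumed to be the "HLS 2M" system preset; verify in your account
        "SegmentDuration": "10",              # 10-second HLS segments
    }],
    Playlists=[{
        "Name": "index",
        "Format": "HLSv3",
        "OutputKeys": ["2M/"],
    }],
)
print(response["Job"]["Id"])

CloudFront then serves the generated playlist and segments straight from the S3 output bucket, while an S3 lifecycle rule moves the original MP4 masters to Glacier after a few days, which is the architecture described in option 1.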







Question : QuickTechie INC wants to implement an order fulfillment process for selling a personalized gadget that needs an average of - days to produce, with some orders taking up
to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months and 10,000 orders per day after 12 months. Orders coming in are checked for consistency,
then dispatched to your manufacturing plant for production, quality control, packaging, shipment and payment processing. If the product does not meet the quality standards at any stage of
the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure.

Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders.
How can you implement the order fulfillment process while making sure that the emails are delivered reliably?

1. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status; use one of the Elastic
Beanstalk instances to send emails to customers.
2. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails
to customers.
3. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers.
4. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 Instances that poll the tasks and execute them. Use SES to send emails to customers.

Correct Answer : 3


Explanation: For order processing we will definitely use SWF, hence options 1 and 4 are gone. Options 2 and 3 are the same except for the email service, and SES is the best solution for
sending email reliably, so answer 3 is correct.
Inside the Decider
Your Decider code simply polls Simple Workflow asking for decisions to be made, and then decides on the next step. Your code has access to all of the information it needs to make a
decision including the type of the workflow and a detailed history of the prior steps taken in the workflow. The Decider can also annotate the workflow with additional data.
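A minimal, hedged sketch of such a decider loop (the SWF domain, task list, activity type and email addresses are hypothetical, not taken from the question) that polls for decision tasks, schedules the next activity and uses SES to notify the customer:

import boto3

swf = boto3.client("swf", region_name="us-east-1")   # region is an assumption
ses = boto3.client("ses", region_name="us-east-1")

DOMAIN = "OrderFulfillment"            # hypothetical SWF domain
TASK_LIST = {"name": "order-decisions"}

while True:
    # Long-poll SWF for a decision task (the call returns after ~60s if nothing is pending).
    task = swf.poll_for_decision_task(domain=DOMAIN, taskList=TASK_LIST, identity="decider-1")
    if "taskToken" not in task or not task["taskToken"]:
        continue

    # A real decider would inspect task["events"] to pick the next step; for the sketch
    # we always schedule a hypothetical "CheckOrder" activity.
    decisions = [{
        "decisionType": "ScheduleActivityTask",
        "scheduleActivityTaskDecisionAttributes": {
            "activityType": {"name": "CheckOrder", "version": "1.0"},
            "activityId": "check-order-1",
            "taskList": {"name": "order-activities"},
        },
    }]
    swf.respond_decision_task_completed(taskToken=task["taskToken"], decisions=decisions)

    # Notify the customer about the status change via SES (addresses are placeholders).
    ses.send_email(
        Source="orders@example.com",
        Destination={"ToAddresses": ["customer@example.com"]},
        Message={
            "Subject": {"Data": "Your order status has changed"},
            "Body": {"Text": {"Data": "Your personalized gadget moved to the next step."}},
        },
    )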





Question : An AWS customer runs a public blogging website. The site users upload two million blog entries a month. The average blog entry size is KB. The access rate to blog
entries drops to negligible levels 6 months after publication, and users rarely access a blog entry 1 year after publication. Additionally, blog entries have a high update rate during the
first 3 months following publication; this drops to no updates after 6 months. The customer wants to use CloudFront to improve its users' load times. Which of the following
recommendations would you make to the customer?
1. Duplicate entries into two different buckets and create two separate CloudFront distributions where S3 access is restricted only to the CloudFront identity
2. Create a CloudFront distribution with US/Europe price class for US/Europe users and a different CloudFront distribution with All Edge Locations for the remaining
users.
3. Create a CloudFront distribution with S3 access restricted only to the CloudFront identity, and partition the blog entry's location in S3 according to the month it
was uploaded to be used with CloudFront behaviors.
4. Create a CloudFront distribution with Restrict Viewer Access, Forward Query String set to true and a minimum TTL of 0.

Correct Answer :


Explanation: All options involve CloudFront and S3, hence the question is about GET requests.
If your workload is mainly sending GET requests, you should consider using Amazon CloudFront for performance optimization.
By integrating Amazon CloudFront with Amazon S3, you can distribute content to your users with low latency and a high data transfer rate. You will also send fewer direct requests to
Amazon S3, which will reduce your costs.
For example, suppose you have a few objects that are very popular. Amazon CloudFront will fetch those objects from Amazon S3 and cache them. Amazon CloudFront can then serve future
requests for the objects from its cache, reducing the number of GET requests it sends to Amazon S3.

The most obvious optimization when reading objects from S3 is using AWS CloudFront. Being a CDN service, it plays the role of a local cache for your users and brings S3 objects to 50+
edge locations around the world, providing low latencies and high transfer rates for read operations due to the proximity of the data. Using CloudFront reduces the number of GET requests
hitting the original bucket, reducing in turn the cost and the load incurred.
Also, both S3 and CloudFront support partial requests for an object (Range GETs), where it is downloaded in parallel in smaller units, similar to multipart uploads. Most download managers
do exactly that to minimize the time it takes to fetch a file.
An alternative optimization for efficient S3 reads would be utilizing S3's BitTorrent support. This way all downloading clients share the content among themselves rather than
requesting the same object from the S3 bucket over and over again. Similarly to CloudFront, this reduces the S3 load and the costs involved, as significantly less data is transferred
from the S3 bucket out to the Internet by the end of the day. One important thing to note, though: BitTorrent only works for S3 objects smaller than 5 GB.
As you can see, either CloudFront or BitTorrent is a "must have" technique if you're about to publicly distribute S3 objects. Giving out a bare S3 download link is not the way to go for
potentially popular content, as it can only go so far when the load increases.
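To illustrate the Range GET point above, here is a small, hedged example (the bucket and key are placeholders) that fetches only the first mebibyte of an object; CloudFront forwards such ranged requests to S3 in the same way.

import boto3

s3 = boto3.client("s3")

# Bucket and key are placeholders for illustration only.
response = s3.get_object(
    Bucket="quicktechie-blog-assets",
    Key="entries/2015/01/post-123.html",
    Range="bytes=0-1048575",          # first 1 MiB of the object
)
chunk = response["Body"].read()
print(len(chunk), response["ContentRange"])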




Related Questions


Question : QuickTechie.com is planning to host a web server (Apache Tomcat) as well as a JEE app server (WebLogic) on a single EC2 instance which
is part of the public subnet of a VPC. How can QuickTechie set this up so as to have two separate public IPs and separate security groups for
both the WebLogic and the Tomcat servers?
1. Launch a VPC with ELB such that it redirects requests to separate VPC instances of the public subnet.
2. Launch a VPC instance with two network interfaces. Assign a separate security group and Elastic IP to each.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Launch a VPC instance with two network interfaces. Assign a separate security group to each and AWS will assign a separate public IP to them.
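For context only, a hedged boto3 sketch of the two-network-interface approach described in options 2 and 4 (all IDs are placeholders): create a second ENI with its own security group, attach it to the instance, and associate an Elastic IP with it so the instance ends up with two public IPs.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an assumption

SUBNET_ID = "subnet-11111111"        # placeholder public subnet
INSTANCE_ID = "i-22222222"           # placeholder instance running Tomcat and WebLogic
WEBLOGIC_SG = "sg-33333333"          # placeholder security group for WebLogic traffic

# Create a second network interface with its own security group.
eni = ec2.create_network_interface(SubnetId=SUBNET_ID, Groups=[WEBLOGIC_SG])
eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]

# Attach it to the instance as a secondary interface (device index 1).
ec2.attach_network_interface(NetworkInterfaceId=eni_id, InstanceId=INSTANCE_ID, DeviceIndex=1)

# Allocate an Elastic IP and associate it with the new interface,
# giving the instance a second public IP.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(AllocationId=eip["AllocationId"], NetworkInterfaceId=eni_id)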



Question : Map the following storage and its characteristics

A. Amazon EBS
B. Amazon EC2 Instance Store
C. Amazon S3
D. Root Storage

1. recommended storage option when you run a database on an instance
2. if you stop or terminate an instance, any data stored on volumes is lost
3. Access Mostly Uused Products by 50000+ Subscribers
4. contains all the information necessary to boot the instance
1. A-1 , B-2, C-3, D-4
2. A-2 , B-1, C-4, D-3
3. Access Mostly Uused Products by 50000+ Subscribers
4. A-3 , B-4, C-1, D-2
5. A-1 , B-2, C-4, D-3



Question : Select the correct statement for EBS

1. You can create Amazon EBS volumes from 1 GiB to 1 TiB in size. You can mount these volumes as devices on your Amazon EC2 instances.
2. You can create point-in-time snapshots of Amazon EBS volumes, which are persisted to Amazon S3.
3. Access Mostly Uused Products by 50000+ Subscribers
4. 1 and 2
5. 2 and 3



Question : QuickTechie.com has an EC2 instance on which this website is running, but they realized that this instance is not good enough for a heavy workload.
Hence they decided to upgrade the EC2 instance; however, they do not want to lose the attached EBS volume, so they created a snapshot of the volume. After upgrading
the EC2 instance they attached a new volume created from the previous snapshot. Select the correct statement in this scenario...



1. New volumes created from existing Amazon S3 snapshots load lazily in the background.
2. New volumes created from existing Amazon S3 snapshots are loaded fully before the new instance starts.
3. Access Mostly Uused Products by 50000+ Subscribers
4. New volumes created from existing Amazon S3 snapshots first need to be decrypted and then load lazily in the background.



Question : QuickTechie.com has an EC2 instance on which this website is running, but they realized that this instance is not good enough for a heavy workload.
Hence they decided to upgrade the EC2 instance; however, they do not want to lose the attached EBS volume, so they created a snapshot of the volume. After upgrading
the EC2 instance they attached a new volume created from the previous snapshot. As this is a production snapshot, select the correct statement for this.


1. To avoid the possibility of increased read or write latency on a production workload, you should first access all of the blocks on the volume to ensure optimal
performance
2. To avoid the possibility of increased read or write latency on a production workload, you should wait for all the data to be downloaded and then start the EC2 instance.
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above
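As a hedged illustration of the block-access technique mentioned in option 1 above (the device name is an assumption; in practice you would run this, or an equivalent dd/fio command, as root on the instance), reading every block once forces all snapshot data to be pulled down from S3 before the volume serves production I/O.

# Minimal pre-warming sketch: read the whole block device once so every block
# backed by the S3 snapshot is downloaded before production traffic hits the volume.
# /dev/xvdf is an assumed device name; run with root privileges on the instance.
CHUNK = 1024 * 1024  # 1 MiB reads

with open("/dev/xvdf", "rb") as device:
    while device.read(CHUNK):
        pass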



Question : You are using an EBS volume for storing your website data, but you realized that the volume is about to fill up. How can you increase the volume size?
1. Requesting AWS with the volume id
2. You can change volume size from AWS console
3. Access Mostly Uused Products by 50000+ Subscribers
4. You can expand the storage space of an Amazon EBS volume by migrating your data to a larger volume and then extending the file system on the volume to recognize the
newly-available space.

5. None of the above
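For context only, a hedged boto3 sketch of the migration path described in option 4 (region, IDs, size and device name are placeholders): snapshot the existing volume, create a larger volume from the snapshot, attach it, and then extend the file system on the instance (for example with resize2fs), which is not shown here.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an assumption

OLD_VOLUME_ID = "vol-11111111"   # placeholder: the volume that is filling up
INSTANCE_ID = "i-22222222"       # placeholder instance ID
AZ = "us-east-1a"                # must match the instance's Availability Zone

# 1. Snapshot the existing volume.
snapshot = ec2.create_snapshot(VolumeId=OLD_VOLUME_ID, Description="pre-resize backup")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# 2. Create a larger volume from that snapshot (size in GiB is an example).
new_volume = ec2.create_volume(SnapshotId=snapshot["SnapshotId"], Size=200, AvailabilityZone=AZ)
ec2.get_waiter("volume_available").wait(VolumeIds=[new_volume["VolumeId"]])

# 3. Attach the larger volume to the instance (in practice, detach the old one first).
ec2.attach_volume(VolumeId=new_volume["VolumeId"], InstanceId=INSTANCE_ID, Device="/dev/sdf")

# 4. On the instance, extend the file system (e.g. resize2fs for ext4) to use the new space.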