Question : QuickTechie.com has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region, with a Recovery Time Objective (RTO) of 2 hours and a Recovery Point Objective (RPO) of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the DynamoDB throughput used for the data synchronization, and synchronize only the modified elements. Which design would you choose to meet these requirements?
1. Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day. Create a "Last updated" attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter.
2. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region.
3. Access Mostly Uused Products by 50000+ Subscribers will import data from S3 to DynamoDB in the other region.
4. Send each update also into an SQS queue in the second region; use an Auto Scaling group behind the SQS queue to replay the writes in the second region.
Correct Answer : 1
Explanation: Looking at the question, the RTO is 2 hours (you must be able to recover within 2 hours) and the RPO is 24 hours (it is acceptable to lose data uploaded in the last 24 hours). Only modified data needs to be synchronized, hence we need to be able to copy incremental data. Amazon's announcement reads: We are excited to announce the availability of the "DynamoDB Cross-Region Copy" feature in the AWS Data Pipeline service. DynamoDB Cross-Region Copy enables you to configure a periodic copy of DynamoDB table data from one AWS region to a DynamoDB table in another region (or to a different table in the same region). Using this feature can enable you to deliver applications from other AWS regions using the same data, as well as enabling you to create a copy of your data in another region for disaster recovery purposes.
To get started with this feature, from the AWS Data Pipeline console choose the "Cross Region DynamoDB Copy" Data Pipeline template and select the source and destination DynamoDB tables you want to copy from and to. You can also choose whether you want to perform an incremental or full copy of a table. Specify the time you want the first copy to start and the frequency of the copy, and your scheduled copy will be ready to go. Hence options 2, 3 and 4 are out, and option 1 is correct.
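For illustration only, the sketch below (Python with boto3) shows the kind of incremental filter the "Last updated" attribute makes possible. The table name, attribute name and ISO-8601 timestamp format are assumptions; in practice the Data Pipeline "Cross Region DynamoDB Copy" template performs this filtering for you when you choose an incremental copy.

import boto3
from datetime import datetime, timedelta, timezone
from boto3.dynamodb.conditions import Attr

# Hypothetical table and attribute names, used only to illustrate the idea.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Orders")

# Only fetch items modified within the last 24 hours (the RPO window).
cutoff = (datetime.now(timezone.utc) - timedelta(hours=24)).isoformat()

response = table.scan(FilterExpression=Attr("lastUpdated").gt(cutoff))
items = response["Items"]

# Paginate, since a Scan returns at most 1 MB of data per call.
while "LastEvaluatedKey" in response:
    response = table.scan(
        FilterExpression=Attr("lastUpdated").gt(cutoff),
        ExclusiveStartKey=response["LastEvaluatedKey"],
    )
    items.extend(response["Items"])

print(f"{len(items)} modified items to copy to the DR region")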
Question : QuickTechie.com is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and uses company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if required, you may need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?
1. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.
2. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.
3. Access Mostly Uused Products by 50000+ Subscribers after a few days. CloudFront to serve HLS transcoded videos from EC2.
4. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
Explanation: Seamless Delivery: Using Amazon Elastic Transcoder, Amazon S3 and Amazon CloudFront, you can store, transcode and deliver your content. By setting the S3 permissions for your CloudFront distribution in Amazon Elastic Transcoder, it is now a simple one-step process to transcode content with Amazon Elastic Transcoder and deliver the multiple output videos via progressive download or HLS streaming with CloudFront.
Amazon Elastic Transcoder is media transcoding in the cloud. It is designed to be a highly scalable, easy-to-use and cost-effective way for developers and businesses to convert (or "transcode") media files from their source format into versions that will play back on devices such as smartphones, tablets and PCs.
You can now easily generate protected HLS streams with Amazon Elastic Transcoder and deliver them with Amazon CloudFront. With content protection for HLS, Elastic Transcoder uses encryption keys supplied by you, or generates keys on your behalf. Both methods use the AWS Key Management Service to protect the security of your keys.
There are no additional Amazon Elastic Transcoder or Amazon CloudFront charges for using these new encryption options. Standard AWS Key Management Service charges apply.
Elastically Scalable: Amazon Elastic Transcoder is designed to scale seamlessly with your media transcoding workload. It is architected to handle large volumes of media files and large file sizes. Transcoding pipelines enable you to perform multiple transcodes in parallel. Amazon Elastic Transcoder leverages other Amazon Web Services such as Amazon S3, Amazon EC2, Amazon DynamoDB, Amazon Simple Workflow (SWF) and Amazon Simple Notification Service (SNS) to provide scalability and reliability.
AWS Integration: Amazon Elastic Transcoder provides an important media building block for creating end-to-end media solutions on AWS. For example, you can use Amazon Glacier to store master content, Amazon Elastic Transcoder to transcode masters into renditions for distribution stored in Amazon S3, and stream these renditions at scale over the Internet using Amazon CloudFront.
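As a concrete illustration of this building block, here is a minimal boto3 sketch that submits an HLS transcoding job for a newly uploaded MP4. The pipeline ID, preset ID and object keys are placeholders (use list_presets() to look up an HLS system preset); this is a sketch of the API, not a full production setup.

import boto3

# Placeholder IDs for illustration; find your pipeline in the Elastic
# Transcoder console and an HLS system preset via transcoder.list_presets().
PIPELINE_ID = "YOUR-PIPELINE-ID"
HLS_PRESET_ID = "YOUR-HLS-PRESET-ID"

transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

job = transcoder.create_job(
    PipelineId=PIPELINE_ID,
    Input={"Key": "uploads/training-video.mp4"},   # object in the input bucket
    OutputKeyPrefix="hls/training-video/",
    Outputs=[{
        "Key": "video",
        "PresetId": HLS_PRESET_ID,
        "SegmentDuration": "10",      # 10-second HLS segments
    }],
    Playlists=[{
        "Name": "index",              # produces index.m3u8 for HLS players
        "Format": "HLSv3",
        "OutputKeys": ["video"],
    }],
)

print("Created transcoding job:", job["Job"]["Id"])

The pipeline's output bucket can then be fronted by CloudFront, with an S3 lifecycle rule archiving the original MP4 masters to Glacier after a few days, as option 1 describes.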
Question : QuickTechie Inc. wants to implement an order fulfillment process for selling a personalized gadget that needs an average of - days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months and 10,000 orders per day after 12 months. Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure.
Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders. How can you implement the order fulfillment process while making sure that the emails are delivered reliably?
1. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.
2. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max = 1. Use the decider instance to send emails to customers.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.
Explanation: For order processing we will definitely use SWF, so options 1 and 4 are out. Options 2 and 3 are the same except for the email service, and SES is the best solution for sending email reliably, so answer 3 is correct. Inside the Decider: your decider code simply polls Simple Workflow asking for decisions to be made, and then decides on the next step. Your code has access to all of the information it needs to make a decision, including the type of the workflow and a detailed history of the prior steps taken in the workflow. The decider can also annotate the workflow with additional data.
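As a rough illustration of this pattern, the sketch below (Python with boto3) shows a simplified decider loop that polls SWF and schedules the first activity, plus the SES call an activity worker could use to notify the customer. The domain, task lists, activity type and email addresses are all hypothetical, and a real decider would inspect the full (possibly paginated) event history rather than this single check.

import boto3

swf = boto3.client("swf", region_name="us-east-1")
ses = boto3.client("ses", region_name="us-east-1")

DOMAIN = "order-fulfillment"                        # hypothetical SWF domain
DECISION_TASK_LIST = {"name": "order-deciders"}     # hypothetical task lists
ACTIVITY_TASK_LIST = {"name": "order-activities"}

def notify_customer(address, subject, body):
    # Run by an activity worker, not the decider: SES handles reliable delivery.
    ses.send_email(
        Source="orders@example.com",
        Destination={"ToAddresses": [address]},
        Message={"Subject": {"Data": subject},
                 "Body": {"Text": {"Data": body}}},
    )

while True:
    task = swf.poll_for_decision_task(domain=DOMAIN, taskList=DECISION_TASK_LIST,
                                      identity="decider-1")
    if not task.get("taskToken"):
        continue  # long poll timed out with no work; poll again

    decisions = []
    event_types = {e["eventType"] for e in task["events"]}
    if "ActivityTaskScheduled" not in event_types:
        # First decision for this order: schedule the consistency-check step.
        # Assumes the activity type was registered with default timeouts.
        decisions.append({
            "decisionType": "ScheduleActivityTask",
            "scheduleActivityTaskDecisionAttributes": {
                "activityType": {"name": "CheckOrder", "version": "1"},
                "activityId": "check-order-1",
                "taskList": ACTIVITY_TASK_LIST,
            },
        })

    swf.respond_decision_task_completed(taskToken=task["taskToken"],
                                        decisions=decisions)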
Question : An AWS customer runs a public blogging website. The site's users upload two million blog entries a month. The average blog entry size is KB. The access rate to blog entries drops to a negligible level 6 months after publication, and users rarely access a blog entry 1 year after publication. Additionally, blog entries have a high update rate during the first 3 months following publication, and this drops to no updates after 6 months. The customer wants to use CloudFront to improve their users' load times. Which of the following recommendations would you make to the customer?
1. Duplicate entries into two different buckets and create two separate CloudFront distributions where S3 access is restricted only to the CloudFront identity.
2. Create a CloudFront distribution with the US/Europe price class for US/Europe users and a different CloudFront distribution with All Edge Locations for the remaining users.
3. Access Mostly Uused Products by 50000+ Subscribers was uploaded to be used with CloudFront behaviors.
4. Create a CloudFront distribution with Restrict Viewer Access, Forward Query String set to true and a minimum TTL of 0.
Explanation: All options involve CloudFront and S3, hence the question is about GET requests. If your workload mainly consists of GET requests, you should consider using Amazon CloudFront for performance optimization. By integrating Amazon CloudFront with Amazon S3, you can distribute content to your users with low latency and a high data transfer rate. You will also send fewer direct requests to Amazon S3, which will reduce your costs. For example, suppose you have a few objects that are very popular. Amazon CloudFront will fetch those objects from Amazon S3 and cache them. Amazon CloudFront can then serve future requests for the objects from its cache, reducing the number of GET requests it sends to Amazon S3.
The most obvious optimization when reading objects from S3 is using AWS CloudFront. Being a CDN service, it plays the role of a local cache for your users and brings S3 objects to 50+ edge locations around the world, providing low latencies and high transfer rates for read operations due to proximity of the data. Using CloudFront reduces the number of GET requests hitting the original bucket, which in turn reduces the cost and load incurred. Also, both S3 and CloudFront support partial requests for an object (Range GETs), where it is downloaded in parallel in smaller units, similar to multipart uploads. Most download managers do exactly that to minimize the time it takes to fetch a file.
An alternative optimization for efficient S3 reads would be utilizing S3's BitTorrent support. This way all downloading clients share the content among themselves rather than requesting the same object from an S3 bucket over and over again. Similarly to CloudFront, this reduces the S3 load and the costs involved, as significantly less data is transferred from the S3 bucket out to the Internet. One important thing to note, though: BitTorrent only works for S3 objects smaller than 5 GB. As you can see, either CloudFront or BitTorrent is a "must have" technique if you're about to publicly distribute S3 objects. Giving out a bare S3 download link is not the way to go for potentially popular content, as it can only go so far when the load increases.
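To make the Range GET idea concrete, here is a minimal boto3 sketch that fetches a hypothetical S3 object in 1 MB ranges. The bucket and key are placeholders, and the requests are issued sequentially only for clarity; a real download manager (or CloudFront) would issue them in parallel.

import boto3

s3 = boto3.client("s3")

BUCKET = "quicktechie-blog-assets"        # hypothetical bucket and key
KEY = "entries/2015/03/entry-42.html"

# Determine the object size, then fetch it in 1 MB byte ranges.
size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]
chunk = 1024 * 1024
parts = []
for start in range(0, size, chunk):
    end = min(start + chunk, size) - 1
    resp = s3.get_object(Bucket=BUCKET, Key=KEY, Range=f"bytes={start}-{end}")
    parts.append(resp["Body"].read())

body = b"".join(parts)
print(f"Downloaded {len(body)} bytes in {len(parts)} range requests")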
1. You can create Amazon EBS volumes from 1 GiB to 1 TiB in size. You can mount these volumes as devices on your Amazon EC2 instances.
2. You can create point-in-time snapshots of Amazon EBS volumes, which are persisted to Amazon S3.
3. Access Mostly Uused Products by 50000+ Subscribers
4. 1 and 2
5. 2 and 3
1. New volumes created from existing Amazon S3 snapshots load lazily in the background.
2. New volumes created from existing Amazon S3 snapshots are loaded fully before the new instance starts.
3. Access Mostly Uused Products by 50000+ Subscribers
4. New volumes created from existing Amazon S3 snapshots first need to be decrypted and then load lazily in the background.
1. To avoid the possibility of increased read or write latency on a production workload, you should first access all of the blocks on the volume to ensure optimal performance.
2. To avoid the possibility of increased read or write latency on a production workload, you should wait for all the data to be downloaded and then start the EC2 instance.
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above
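The option sets above all revolve around EBS snapshots and lazy restore. The following boto3 sketch, with placeholder IDs and Availability Zone, shows a point-in-time snapshot being taken and restored into a new volume whose blocks are then loaded lazily from Amazon S3.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

SOURCE_VOLUME_ID = "vol-0123456789abcdef0"   # placeholder volume ID

# 1. Take a point-in-time snapshot; the snapshot data is persisted to Amazon S3.
snap = ec2.create_snapshot(VolumeId=SOURCE_VOLUME_ID,
                           Description="nightly backup")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Restore the snapshot into a new volume. The volume is usable immediately,
#    but its blocks load lazily from S3 in the background; reading every block
#    once (for example with dd or fio) initializes it and avoids the first-read
#    latency penalty on a production workload.
vol = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                        AvailabilityZone="us-east-1b",
                        VolumeType="gp2")
print("Restored volume:", vol["VolumeId"])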