Question : Your company hosts a social media site supporting users in multiple countries. You have been asked to provide a highly available design for the application that leverages multiple regions for the most recently accessed content and latency-sensitive portions of the website. The most latency-sensitive component of the application involves reading user preferences to support website personalization and ad selection. In addition to running your application in multiple regions, which option will support this application's requirements?
1. Serve user content from S3 and CloudFront, and use Route 53 latency-based routing between ELBs in each region. Retrieve user preferences from a local DynamoDB table in each region and leverage SQS to capture changes to user preferences, with SQS workers propagating updates to each table.
2. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3 and CloudFront with dynamic content, with an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage SNS notifications to propagate user preference changes to a worker node in each region.
3. Serve user content from S3 and CloudFront, and use Route 53 latency-based routing between ELBs in each region. Retrieve user preferences from a DynamoDB table and leverage SQS to capture changes to user preferences, with SQS workers propagating DynamoDB updates.
4. Serve user content from S3 and CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage Simple Workflow (SWF) to manage the propagation of user preferences from a centralized DB to each ElastiCache cluster.
Answer: 2
Explanation: ElastiCache provides low-latency retrieval of frequently accessed data, which fits the user-preference reads here. A typical website contains a mix of static and dynamic content: static content includes images or style sheets, while dynamic, application-generated content includes elements of the site that are personalized to each viewer. Previously, developers who wanted to improve the performance and reliability of their dynamic content had limited options, as the solutions offered by traditional CDNs were expensive, hard to configure and difficult to manage. With Amazon CloudFront, there are no additional costs for serving dynamic content beyond Amazon CloudFront's existing low prices for data transfer and requests, and no required long-term commitments. There are also no up-front fees, no monthly platform fees, and no need to hire expensive consultants to help with configuration.
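To make the propagation path in option 2 concrete, here is a minimal sketch of a regional worker. It assumes the common SNS-to-SQS fan-out pattern (a regional SQS queue subscribed to the global SNS topic) and an ElastiCache Redis cluster; the queue URL, cache endpoint, and message shape are all hypothetical, not given in the question.

import json

import boto3
import redis  # redis-py client, assumed installed

# Hypothetical resource names; substitute your own.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/pref-updates"
CACHE_HOST = "prefs-cluster.abc123.0001.use1.cache.amazonaws.com"

sqs = boto3.client("sqs", region_name="us-east-1")
cache = redis.Redis(host=CACHE_HOST, port=6379)

while True:
    # Long-poll the regional queue that is subscribed to the SNS topic.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        envelope = json.loads(msg["Body"])        # SNS delivery envelope
        update = json.loads(envelope["Message"])  # assumed payload, e.g.
        #   {"user_id": "u42", "prefs": {"lang": "ja", "ads": "sports"}}
        cache.set("prefs:" + update["user_id"], json.dumps(update["prefs"]))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

The application then reads prefs:<user_id> from the local cluster, keeping the latency-sensitive read path entirely within the region.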
Question : Your company has its HQ in Tokyo and branch offices all over the world, and uses logistics software with a multi-regional deployment on AWS in Japan, Europe and the USA. The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed its own database. In the HQ region you run an hourly batch process that reads data from every region to compute cross-regional reports, which are sent by email to all offices; this batch process must complete as fast as possible to quickly optimize logistics. How do you build the database architecture in order to meet the requirements?
1. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region.
2. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region.
3. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region.
4. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region.
5. Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the batch process.
Answer: 1
Explanation: Direct Connect does not address this requirement (so option 5 is out). Amazon RDS Read Replicas provide enhanced performance and durability for Database (DB) Instances. This replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted, so that they become standalone DB Instances. Read replicas are available in Amazon RDS for MySQL, PostgreSQL, and Amazon Aurora. When you create a read replica, you specify an existing DB Instance as the source. Amazon RDS takes a snapshot of the source instance and creates a read-only instance from the snapshot. For MySQL and PostgreSQL, Amazon RDS uses those engines' native asynchronous replication to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can connect to a read replica just as they would to any DB instance. Amazon RDS replicates all databases in the source DB instance. Amazon Aurora employs an SSD-backed virtualized storage layer purpose-built for database workloads; Amazon Aurora Replicas share the same underlying storage as the source instance, lowering costs and avoiding the need to copy data to the replica nodes.
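For reference, the per-region setup in option 1 is a single API call per master: create a cross-region read replica in the HQ region pointing at the regional master. A minimal boto3 sketch, with hypothetical identifiers (Tokyo assumed as the HQ region):

import boto3

# Client in the HQ (destination) region; identifiers below are placeholders.
rds = boto3.client("rds", region_name="ap-northeast-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="logistics-eu-replica",
    # Cross-region sources must be referenced by ARN.
    SourceDBInstanceIdentifier="arn:aws:rds:eu-west-1:123456789012:db:logistics-eu-master",
    SourceRegion="eu-west-1",  # lets boto3 build the required pre-signed URL
    DBInstanceClass="db.m5.large",
)

The hourly batch in the HQ region can then query the local replicas, which are kept current by native asynchronous MySQL replication, instead of pulling data across regions at report time.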
Question : You need persistent and durable storage to trace call activity of an IVR (Interactive Voice Response) system. Call durations are mostly in the 2-3 minute range. Each traced call can be either active or terminated. An external application needs to know, each minute, the list of currently active calls, which are usually a few calls per second. But once per month there is a periodic peak of up to 1000 calls/second for a few hours. The system is open 24/7 and any downtime should be avoided. Historical data is periodically archived to files. Cost saving is a priority for this project. What database implementation would better fit this scenario, keeping costs as low as possible?
1. Use RDS Multi-AZ with two tables, one for "Active calls" and one for "Terminated calls". In this way the "Active calls" table is always small and efficient to access.
2. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "IsActive" attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective.
3. Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute that can equal "active" or "terminated". In this way the Global Secondary Index can be used for all items in the table.
4. Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal to "ACTIVE" or "TERMINATED". In this way the SQL query is optimized by the use of the index.
Answer: 2
Explanation: Take Advantage of Sparse Indexes
For any item in a table, DynamoDB will only write a corresponding entry to a global secondary index if the index key value is present in the item. For global secondary indexes, this is the index partition key and its sort key (if present). If the index key value(s) do not appear in every table item, the index is said to be sparse.
You can use a sparse global secondary index to efficiently locate table items that have an uncommon attribute. To do this, you take advantage of the fact that table items that do not contain global secondary index attribute(s) are not indexed at all. For example, in the GameScores table, certain players might have earned a particular achievement for a game - such as "Champ" - but most players have not. Rather than scanning the entire GameScores table for Champs, you could create a global secondary index with a partition key of Champ and a sort key of UserId. This would make it easy to find all the Champs by querying the index instead of scanning the table.
Such a query can be very efficient, because the number of items in the index will be significantly fewer than the number of items in the table. In addition, the fewer table attributes you project into the index, the fewer read capacity units you will consume from the index.
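Applied to option 2, the external application can fetch the active-call list each minute with a Query against the sparse index. A minimal sketch, assuming a table named "Calls", an index named "IsActive-index", and an "IsActive" attribute that is set to "true" on active calls only and removed when a call terminates:

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
calls = dynamodb.Table("Calls")  # hypothetical table name

# Only active calls carry the IsActive attribute, so only they appear
# in the sparse index; this query never touches terminated calls.
resp = calls.query(
    IndexName="IsActive-index",  # hypothetical GSI name
    KeyConditionExpression=Key("IsActive").eq("true"),
)
active_calls = resp["Items"]
# For larger result sets, follow resp.get("LastEvaluatedKey") to paginate.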
1. Use a separate ELB for each instance type and distribute load to ELBs with Route 53 weighted round robin.
2. Configure an Auto Scaling group and Launch Configuration with ELB to add up to 10 more on-demand m1.large instances when triggered by CloudWatch. Shut off c3.2xlarge instances.
3. Route traffic to the m1.large and c3.2xlarge instances directly using Route 53 latency-based routing and health checks. Shut off the ELB.
4. Configure ELB with two c3.2xlarge instances and use an on-demand Auto Scaling group for up to two additional c3.2xlarge instances. Shut off m1.large instances.
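For context on option 4's shape, here is a hedged boto3 sketch that registers an Auto Scaling group of c3.2xlarge on-demand instances with an existing Classic ELB; every name, AMI ID, and Availability Zone below is a placeholder.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Launch Configuration describing the instance type to scale.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-c3-2xlarge",
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="c3.2xlarge",
)

# Keep two baseline instances behind the ELB and allow up to two more
# on-demand instances when scaling policies trigger.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-c3-2xlarge",
    MinSize=2,
    MaxSize=4,
    LoadBalancerNames=["web-elb"],  # existing Classic ELB, as in the option
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)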