Question : A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web servers currently share read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes.
Which AWS storage and database architecture meets the requirements of the application?
1. Web servers: store read-only data in S3 and copy it from S3 to the root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs; database backed up via DB snapshots.
2. Web servers: store read-only data in S3 and copy it from S3 to the root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.
3. Access Mostly Uused Products by 50000+ Subscribers Database: use RDS with Multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs; database backed up via DB snapshots.
4. Web servers: store read-only data on an EC2 NFS server and mount it to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs; database backed up via DB snapshots.
Answer: 1
Explanation: Static read-only content can be stored in S3 (hence option 4 is out). The database uses read slaves, which can be replaced by Read Replicas in RDS (hence option 3 is out, since it does not mention Read Replicas). There are two ways to back up an RDS database:
Automated Backups
Automated backup is an Amazon RDS feature that automatically creates a backup of your database. Automated backups are enabled by default for a new DB instance.
An automated backup occurs during a daily user-configurable period of time known as the preferred backup window. Backups created during the backup window are retained for a user-configurable number of days (the backup retention period). Note that if the backup requires more time than allotted to the backup window, the backup will continue to completion.
DB Snapshots
DB snapshots are user-initiated and enable you to back up your DB instance in a known state as frequently as you wish, and then restore to that specific state at any time. DB snapshots can be created with the Amazon RDS console or the CreateDBSnapshot action in the Amazon RDS API. DB snapshots are kept until you explicitly delete them with the Amazon RDS console or the DeleteDBSnapshot action in the Amazon RDS API.
Hence option 1 is correct.
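As a concrete illustration of the user-initiated DB snapshot path described above, here is a minimal boto3 sketch. The instance identifier and snapshot naming scheme are hypothetical placeholders, not values from the question.

```python
# Minimal sketch: user-initiated RDS DB snapshot with boto3.
# The instance identifier "webapp-db" and the snapshot name are hypothetical.
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds", region_name="us-east-1")

snapshot_id = "webapp-db-" + datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")

# Equivalent to the CreateDBSnapshot API action described above.
rds.create_db_snapshot(
    DBSnapshotIdentifier=snapshot_id,
    DBInstanceIdentifier="webapp-db",
)

# Snapshots are retained until explicitly deleted (DeleteDBSnapshot).
# rds.delete_db_snapshot(DBSnapshotIdentifier=snapshot_id)
```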
Question : Your customer wishes to deploy an enterprise application to AWS that will consist of several web servers, several application servers, and a small (GB) Oracle database. Information is stored both in the database and in the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database. Which backup architecture will meet these requirements?
1. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs and supplement with file-level backup to S3 using traditional enterprise backup software to provide file-level restore.
2. Backup RDS using a Multi-AZ deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file-level restore.
3. Access Mostly Uused Products by 50000+ Subscribers traditional enterprise backup software to provide file-level restore.
4. Backup the RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.
Answer: 1
Explanation: Since recovery must take no more than two hours, Glacier is not a good solution (option 3 is out). It is always a good idea to create backups of your working instance configurations by bundling a custom AMI after any modifications. Oracle backup and recovery on RDS:
Automated Backups - Turned on by default, the automated backup feature of Amazon RDS enables point-in-time recovery for your DB Instance. Amazon RDS will back up your database and transaction logs and store both for a user-specified retention period. This allows you to restore your DB Instance to any second during your retention period, up to the last five minutes. Your automatic backup retention period can be configured to up to thirty-five days.
DB Snapshots - DB Snapshots are user-initiated backups of your DB Instance. These full database backups will be stored by Amazon RDS until you explicitly delete them. You can create a new DB Instance from a DB Snapshot whenever you desire.
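The following boto3 sketch shows the two backup pieces from option 1 side by side: bundling an EC2 instance into an AMI and making sure RDS automated daily backups are retained. The instance IDs, names, and retention period are hypothetical placeholders.

```python
# Minimal sketch of the backup pieces in option 1, using boto3.
# Instance IDs and names are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
rds = boto3.client("rds", region_name="us-east-1")

# Whole-server restore point: bundle the EC2 instance into an AMI.
ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # hypothetical app server
    Name="appserver-weekly-backup",
    NoReboot=True,                       # avoid stopping the running instance
)

# Database recovery: keep RDS automated (daily) backups for point-in-time restore.
rds.modify_db_instance(
    DBInstanceIdentifier="oracle-db",    # hypothetical RDS Oracle instance
    BackupRetentionPeriod=7,             # days of point-in-time recovery
    ApplyImmediately=True,
)
```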
Question : You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?
1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
2. Ingest data into a DynamoDB table and move old data to a Redshift cluster
3. Access Mostly Uused Products by 50000+ Subscribers
4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS
Answer: 2
Explanation: 100 sensors generated 3GB of data per month during the pilot; with 100K sensors, that scales to roughly 3,000GB (3TB) per month. For that volume of simple, write-heavy data, RDS is not a good fit; it is better suited to structured relational workloads. Amazon DynamoDB is ideal for existing or new applications that need a flexible NoSQL database with low read and write latencies, and the ability to scale storage and throughput up or down as needed without code changes or downtime. Common use cases include: mobile apps, gaming, digital ad serving, live voting and audience interaction for live events, sensor networks, log ingestion, access control for web-based content, metadata storage for Amazon S3 objects, e-commerce shopping carts, and web session management. Many of these use cases require a highly available and scalable database because downtime or performance degradation has an immediate negative impact on an organization's business. Amazon DynamoDB uses SSD drives and is optimized for workloads with a high I/O rate per GB stored. If you plan to store very large amounts of data that are infrequently accessed, other storage options, such as Amazon S3, may be a better choice.
Amazon Redshift is ideal for analyzing large datasets using your existing business intelligence tools. Organizations are using Amazon Redshift to analyze global sales data for multiple products, store historical stock trade data, analyze ad impressions and clicks, aggregate gaming data, analyze social trends, and measure clinical quality, operational efficiency, and financial performance in the health care space.
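To illustrate the ingestion write path from option 2, here is a hedged boto3 sketch that writes one sensor reading per DynamoDB item; the table name, key schema, and attribute names are assumptions for illustration only.

```python
# Illustrative sketch of the ingestion write path in option 2: one sensor
# reading per DynamoDB item. Table and attribute names are hypothetical.
import boto3
from datetime import datetime, timezone

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("SensorData")  # assumed keys: sensor_id (partition), ts (sort)

def ingest_reading(sensor_id: str, noise_db: int, air_quality_index: int) -> None:
    """Write a single 1KB-class sensor reading; DynamoDB scales writes by
    provisioned (or on-demand) throughput rather than by instance size."""
    table.put_item(
        Item={
            "sensor_id": sensor_id,
            "ts": datetime.now(timezone.utc).isoformat(),
            "noise_db": noise_db,
            "air_quality_index": air_quality_index,
        }
    )

ingest_reading("sensor-0042", noise_db=63, air_quality_index=41)
```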
1. AWS does not have sufficient capacity in that availability zone
2. AWS zone mapping is changed for that user account
3. Access Mostly Uused Products by 50000+ Subscribers
4. The user account has reached the maximum EC2 instance limit
1. Define the group policy and add a condition which allows access based on the IAM user name
2. Create a DynamoDB table with the same name as the IAM user name and define the policy rule which grants access based on the DynamoDB ARN using a variable
3. Access Mostly Uused Products by 50000+ Subscribers
4. It is not possible to have a group-level policy which allows different IAM users access to different DynamoDB tables
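Option 2 above relies on IAM policy variables. As a hedged sketch of that idea, the group policy below restricts each user to the DynamoDB table whose name matches ${aws:username}; the account ID, policy name, and action list are hypothetical.

```python
# Hedged sketch: a single policy whose resource ARN uses the ${aws:username}
# IAM policy variable, so each user can only reach the table named after them.
# Account ID, policy name, and actions are hypothetical placeholders.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/${aws:username}",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="PerUserDynamoDBTableAccess",
    PolicyDocument=json.dumps(policy_document),
)
```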
Correct Answer: 1
Explanation: Looking at the question, the RTO is within hours (you should be able to recover within that window) and the RPO is also hours (it is acceptable to lose the data uploaded in the last few hours). Only modified data needs to be synchronized, so we need to be able to copy incremental data. Amazon's announcement is quoted below:
We are excited to announce the availability of the "DynamoDB Cross-Region Copy" feature in the AWS Data Pipeline service. DynamoDB Cross-Region Copy enables you to configure periodic copies of DynamoDB table data from one AWS region to a DynamoDB table in another region (or to a different table in the same region). Using this feature can enable you to deliver applications from other AWS regions using the same data, as well as enabling you to create a copy of your data in another region for disaster recovery purposes.
To get started with this feature, from the AWS Data Pipeline console choose the "Cross Region DynamoDB Copy" Data Pipeline template and select the source and destination DynamoDB tables you want to copy from and to. You can also choose whether you want to perform an incremental or full copy of a table. Specify the time you want the first copy to start and the frequency of the copy, and your scheduled copy will be ready to go. Hence options 2, 3, and 4 are out, and option 1 is correct.
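The managed route is the Data Pipeline template described above. Purely as a simplified illustration of the underlying copy (not the template itself), the boto3 sketch below scans a table in one region and batch-writes the items to a table in another; table and region names are hypothetical.

```python
# Simplified illustration only: a full table copy across regions via Scan and
# batch writes. The managed "Cross Region DynamoDB Copy" template in Data
# Pipeline handles scheduling and incremental copies for you.
import boto3

src = boto3.resource("dynamodb", region_name="us-east-1").Table("Orders")
dst = boto3.resource("dynamodb", region_name="eu-west-1").Table("Orders")

def copy_table() -> None:
    """Paginate a Scan of the source table and batch-write items to the target."""
    scan_kwargs = {}
    while True:
        page = src.scan(**scan_kwargs)
        with dst.batch_writer() as batch:
            for item in page["Items"]:
                batch.put_item(Item=item)
        if "LastEvaluatedKey" not in page:
            break
        scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

copy_table()
```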
Question : QuickTechie.com is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if required, you may need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?
1. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.
2. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.
3. Access Mostly Uused Products by 50000+ Subscribers after a few days. CloudFront to serve HLS transcoded videos from EC2.
4. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
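As a hedged sketch of the option 1 architecture for the question above, the boto3 snippet below submits one Elastic Transcoder HLS job and adds an S3 lifecycle rule that archives the original MP4 uploads to Glacier. The pipeline ID, preset ID, bucket, and key names are placeholders; verify preset IDs with ListPresets in your account.

```python
# Hedged sketch of option 1: one Elastic Transcoder HLS job plus an S3
# lifecycle rule that moves original uploads to Glacier after a few days.
# Pipeline ID, preset ID, bucket, and keys are hypothetical placeholders.
import boto3

transcoder = boto3.client("elastictranscoder", region_name="us-east-1")
s3 = boto3.client("s3")

# Submit one HLS transcoding job for a newly uploaded MP4.
transcoder.create_job(
    PipelineId="1111111111111-abcde1",        # hypothetical pipeline
    Input={"Key": "originals/course-01.mp4"},
    OutputKeyPrefix="hls/course-01/",
    Outputs=[{
        "Key": "video",
        "PresetId": "1351620000001-200010",    # placeholder HLS system preset; check ListPresets
        "SegmentDuration": "10",
    }],
    Playlists=[{"Name": "index", "Format": "HLSv3", "OutputKeys": ["video"]}],
)

# Archive the original uploads to Glacier after a few days.
s3.put_bucket_lifecycle_configuration(
    Bucket="quicktechie-training-videos",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-originals",
            "Filter": {"Prefix": "originals/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
        }]
    },
)
```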
1. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status; use one of the Elastic Beanstalk instances to send emails to customers.
2. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.
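As a hedged sketch of the pattern in option 4 above (not necessarily the correct answer for this question), the worker below long-polls an SQS queue for order tasks and notifies the customer through SES. The queue URL and email addresses are hypothetical placeholders.

```python
# Sketch of the option 4 pattern: an auto-scaled worker that polls an SQS
# queue for order-processing tasks and notifies the customer through SES.
# Queue URL and email addresses are hypothetical placeholders.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
ses = boto3.client("ses", region_name="us-east-1")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/order-tasks"

def poll_and_process() -> None:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,          # long polling
    )
    for msg in resp.get("Messages", []):
        order_id = msg["Body"]       # assume the message body carries an order id
        # ... perform the processing step for this order here ...
        ses.send_email(
            Source="orders@example.com",
            Destination={"ToAddresses": ["customer@example.com"]},
            Message={
                "Subject": {"Data": f"Order {order_id} update"},
                "Body": {"Text": {"Data": f"Your order {order_id} has been processed."}},
            },
        )
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

poll_and_process()
```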