Question : QuickTechie.com is setting up a multi-site solution where the application runs on premises as well as on AWS to achieve the minimum RTO. Which of the below-mentioned configurations will not meet the requirements of the multi-site solution scenario? 1. Configure data replication based on RTO. 2. Set up a single DB instance which will be accessed by both sites. 3. Access Mostly Uused Products by 50000+ Subscribers 4. Set up a weighted DNS service like Route 53 to route traffic across sites.
Explanation: AWS has many solutions for DR and HA. When the organization wants HA and DR with a multi-site solution, it should set up two sites: one on premises and the other on AWS at full capacity. The organization should set up a weighted DNS service that can route traffic to both sites based on the assigned weights. When one of the sites fails, the DNS service can route the entire load to the other site, giving the organization a minimal RTO in this scenario. If the organization sets up a single DB instance, it will not work well in a failover. Instead, it should have two separate DBs, one in each site, and set up data replication based on the RTO of the organization.
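The weighted routing described above can be sketched as a Route 53 change batch. This is a minimal illustration, not the exam's reference setup: the domain, IP addresses, and 50/50 weights are assumed for the example.

```python
def weighted_change_batch(domain, sites):
    """Build a Route 53 change batch with one weighted A record per site.

    `sites` maps a SetIdentifier (e.g. "on-premises", "aws") to an
    (ip_address, weight) pair. Route 53 splits traffic in proportion to
    the weights; setting one site's weight to 0 fails all traffic over
    to the other site.
    """
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": domain,
                    "Type": "A",
                    "SetIdentifier": site_id,
                    "Weight": weight,
                    "TTL": 60,  # short TTL so failover takes effect quickly
                    "ResourceRecords": [{"Value": ip}],
                },
            }
            for site_id, (ip, weight) in sites.items()
        ]
    }

# Example: split load evenly between the on-premises site and AWS
# (hypothetical domain and documentation-range IPs).
batch = weighted_change_batch(
    "app.quicktechie.com.",
    {"on-premises": ("203.0.113.10", 50), "aws": ("198.51.100.20", 50)},
)
```

In practice such a batch would be passed to boto3's `route53.change_resource_record_sets(HostedZoneId=..., ChangeBatch=batch)`; here only the batch construction is shown so the sketch stays self-contained.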
Question : If a disaster occurs at 12:00 PM (noon) and the RPO is one hour, the system should recover all data that was in the system before what time?
Explanation: Recovery point objective (RPO) - The acceptable amount of data loss measured in time. For example, if a disaster occurs at 12:00 PM (noon) and the RPO is one hour, the system should recover all data that was in the system before 11:00 AM. Data loss will span only one hour, between 11:00 AM and 12:00 PM (noon). A company typically decides on an acceptable RTO and RPO based on the financial impact to the business when systems are unavailable. The company determines financial impact by considering many factors, such as the loss of business and damage to its reputation due to downtime and the lack of systems availability.
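The RPO arithmetic in the explanation is just a subtraction, which can be made explicit in a couple of lines (the date is arbitrary, chosen only to make the example concrete):

```python
from datetime import datetime, timedelta

def recovery_point(disaster_time, rpo):
    """Latest point in time whose data must survive: disaster time minus RPO."""
    return disaster_time - rpo

# Worked example from the explanation: a disaster at 12:00 PM (noon) with a
# one-hour RPO means all data up to 11:00 AM must be recoverable.
noon = datetime(2024, 1, 1, 12, 0)
point = recovery_point(noon, timedelta(hours=1))
# point == 11:00 AM on the same day; data loss spans 11:00 AM to 12:00 PM.
```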
Question : In the scenario of replicating from AWS production to an AWS DR solution using multiple AWS Regions, when you replicate data to a remote location, which factors should you consider?
Explanation: Applications deployed on AWS have multi-site capability by means of multiple Availability Zones. Availability Zones are distinct locations that are engineered to be insulated from each other. They provide inexpensive, low-latency network connectivity within the same region. Some applications might have an additional requirement to deploy their components using multiple regions; this can be a business or regulatory requirement. Any of the preceding scenarios in this whitepaper can be deployed using separate AWS regions. The advantages for both production and DR scenarios include the following: you don't need to negotiate contracts with another provider in another region; you can use the same underlying AWS technologies across regions; and you can use the same tools or APIs. When you replicate data to a remote location, you should consider these factors: distance between the sites (larger distances typically are subject to more latency or jitter); available bandwidth (the breadth and variability of the interconnections); the data rate required by your application (the data rate should be lower than the available bandwidth); and the replication technology (it should be parallel so that it can use the network effectively).
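The "data rate should be lower than the available bandwidth" factor can be sketched as a simple feasibility check. The 80% headroom factor is an illustrative assumption (to leave room for bursts and protocol overhead), not a figure from the whitepaper:

```python
def replication_feasible(data_rate_mbps, bandwidth_mbps, headroom=0.8):
    """Check that the sustained replication data rate stays below the
    available bandwidth, with some headroom for bursts and overhead.

    `headroom` is an assumed safety factor: only this fraction of the
    link is treated as usable for steady-state replication.
    """
    return data_rate_mbps <= bandwidth_mbps * headroom

# Example: a 100 Mbps inter-region link comfortably carries a 60 Mbps
# change rate, but a 90 Mbps change rate would saturate it once the
# headroom is taken into account.
ok = replication_feasible(60, 100)
too_fast = replication_feasible(90, 100)
```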
1. Yes, you should deploy two Memcached ElastiCache clusters in different AZs because the RDS instance will not be able to handle the load if the cache node fails. 2. No, if the cache node fails, the automated ElastiCache node recovery feature will prevent any availability impact. 3. Access Mostly Uused Products by 50000+ Subscribers 4. No, if the cache node fails you can always get the same data from the DB without having any availability impact.
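The fallback behavior that options 2 and 4 rely on is the classic cache-aside read path: on a cache miss (or a failed cache node), serve the value from the database and repopulate the cache. A minimal sketch, using plain dicts as stand-ins for a Memcached client and an RDS table:

```python
def cache_aside_get(key, cache, db):
    """Cache-aside read: try the cache first, fall back to the database
    on a miss, and warm the cache for subsequent reads.

    Note the trade-off behind option 1: while the cache is cold or down,
    every read lands on the database, which must absorb that extra load.
    """
    value = cache.get(key)
    if value is None:
        value = db[key]      # database is the source of truth
        cache[key] = value   # repopulate so later reads hit the cache
    return value

# Simulated cache-node failure: an empty cache still serves reads from
# the DB, then self-heals as entries are repopulated.
db = {"user:1": "alice"}
cache = {}
result = cache_aside_get("user:1", cache, db)
```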
1. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard. 2. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard. 3. Access Mostly Uused Products by 50000+ Subscribers 4. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
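The SQS notification in options 2 and 4 amounts to queuing a small message that the on-premises consumer polls for. A sketch of building such a message body; the field names and status value are illustrative, not a fixed schema:

```python
import json

def dashboard_update_message(report_id, status="batch-complete"):
    """Build the JSON body for a queue message telling the on-premises
    consumer that a batch analysis finished and the dashboard should be
    refreshed. Field names here are hypothetical."""
    return json.dumps({"report_id": report_id, "status": status})

msg = dashboard_update_message("daily-2024-01-01")
```

With boto3, this body would be enqueued via `sqs.send_message(QueueUrl=..., MessageBody=msg)` and the on-premises side would poll with `receive_message`; only the local message construction is shown here.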
1. Send all the log events to Amazon SQS. Set up an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics. 2. Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs. 3. Access Mostly Uused Products by 50000+ Subscribers 4. Set up an Auto Scaling group of EC2 syslogd servers, store the logs on S3, and use EMR to apply heuristics on the logs.
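The "client process to apply heuristics" in option 2 reduces to filtering each batch of decoded records against some rules. A sketch with the stream simulated by a plain list (a real Kinesis consumer would obtain the batch from `get_records` or the Kinesis Client Library; the substring patterns are assumptions for illustration):

```python
def apply_heuristics(events, patterns=("ERROR", "TIMEOUT")):
    """Flag log events matching simple substring heuristics.

    A Kinesis consumer would call this on each batch of decoded records;
    here `events` is just an iterable of log lines.
    """
    return [e for e in events if any(p in e for p in patterns)]

# Simulated batch of log lines pulled from the stream.
logs = ["INFO start", "ERROR disk full", "WARN slow", "TIMEOUT db"]
flagged = apply_heuristics(logs)
```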