
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : QuickTechie.com is setting up a multi-site solution where the application runs on premises as well as on AWS to achieve the minimum RTO. Which of the below mentioned configurations will not meet the requirements of the multi-site solution scenario?
1. Configure data replication based on RTO.
2. Setup a single DB instance which will be accessed by both sites.
3. Keep the application running on premises as well as on AWS with full capacity.
4. Setup a weighted DNS service like Route 53 to route traffic across sites.



Correct Answer : 2


Explanation: AWS has many solutions for DR and HA. When an organization wants HA and DR with a multi-site solution, it should set up two sites, one on premises and the other on AWS, each with full capacity. The organization should set up a weighted DNS service, such as Route 53, which can route traffic to both sites based on the assigned weights; when one of the sites fails, the service can route the entire load to the other site. The organization achieves minimal RTO in this scenario. If the organization sets up a single DB instance, it will not fail over well. Instead, it should run two separate DBs, one in each site, and set up data replication based on the organization's RTO.
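To make the weighted DNS idea concrete, here is a minimal boto3 sketch (not the question's official solution) that upserts two weighted Route 53 A records, one per site. The hosted zone ID, record name, and endpoint IPs are hypothetical placeholders.

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000000EXAMPLE"  # hypothetical hosted zone

def upsert_weighted_record(set_identifier, target_ip, weight):
    # One weighted A record; Route 53 splits traffic by relative weight.
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",       # hypothetical record name
                    "Type": "A",
                    "SetIdentifier": set_identifier,  # distinguishes the two sites
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": target_ip}],
                },
            }]
        },
    )

# Route traffic 50/50 across the on-premises site and the AWS site; setting
# one weight to 0 shifts the entire load to the other site.
upsert_weighted_record("on-premises", "203.0.113.10", 50)
upsert_weighted_record("aws", "198.51.100.20", 50)

In practice you would also attach Route 53 health checks so the failover happens automatically rather than through manual weight changes.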







Question : If a disaster occurs at 12:00 PM (noon) and the RPO is one hour, the system should recover all data that was in the system

1. before 11:00 AM
2. before 12:00 PM
3.
4. None of above


Correct Answer : 1
Explanation: Recovery point objective (RPO) - The acceptable amount of data loss measured in time. For example, if a disaster
occurs at 12:00 PM (noon) and the RPO is one hour, the system should recover all data that was in the system before
11:00 AM. Data loss will span only one hour, between 11:00 AM and 12:00 PM (noon).
A company typically decides on an acceptable RTO and RPO based on the financial impact to the business when systems
are unavailable. The company determines financial impact by considering many factors, such as the loss of business and
the damage to its reputation due to downtime and the lack of systems availability.
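The RPO arithmetic can be shown in a few lines of Python; the disaster date below is arbitrary.

from datetime import datetime, timedelta

disaster_time = datetime(2024, 1, 1, 12, 0)  # disaster strikes at 12:00 PM (noon)
rpo = timedelta(hours=1)                     # acceptable data loss: one hour

recovery_point = disaster_time - rpo
print(recovery_point.strftime("%I:%M %p"))   # -> 11:00 AM; loss spans 11:00-12:00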





Question : In the scenario "AWS Production to an AWS DR Solution Using Multiple AWS Regions",
when you replicate data to a remote location, you should consider

A. Distance between the sites
B. Available bandwidth
C. Data rate required by your application
D. Replication technology
1. A,B,C
2. B,C,D
3.
4. A,B,C,D


Correct Answer : 4
Explanation: Applications deployed on AWS have multi-site capability by means of multiple Availability Zones. Availability Zones are
distinct locations that are engineered to be insulated from each other. They provide inexpensive, low-latency network
connectivity within the same region.
Some applications might have an additional requirement to deploy their components using multiple regions; this can be
a business or regulatory requirement.
Any of the preceding scenarios in this whitepaper can be deployed using separate AWS regions. The advantages for both
production and DR scenarios include the following:
- You don't need to negotiate contracts with another provider in another region
- You can use the same underlying AWS technologies across regions
- You can use the same tools or APIs
When you replicate data to a remote location, you should consider these factors:
- Distance between the sites - Larger distances typically are subject to more latency or jitter.
- Available bandwidth - The breadth and variability of the interconnections.
- Data rate required by your application - The data rate should be lower than the available bandwidth.
- Replication technology - The replication technology should be parallel (so that it can use the network effectively).
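As one concrete form of cross-region replication for DR, the sketch below copies an EBS snapshot from the production region to the DR region with boto3. The regions and snapshot ID are hypothetical, and a real pipeline would schedule this at a frequency derived from the RPO.

import boto3

SOURCE_REGION = "us-east-1"   # hypothetical production region
DR_REGION = "us-west-2"       # hypothetical DR region

# copy_snapshot is called against the *destination* region.
dr_ec2 = boto3.client("ec2", region_name=DR_REGION)

resp = dr_ec2.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId="snap-0123456789abcdef0",  # hypothetical snapshot ID
    Description="Hourly DR copy of the production data volume",
)
print("DR snapshot:", resp["SnapshotId"])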







Related Questions


Question : You have an application running on an EC2 instance which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL,
the application should verify the existence of the file in S3. How should the application use AWS credentials to access the S3 bucket securely?
1. Use the AWS account access keys; the application retrieves the credentials from the source code of the application.
2. Create an IAM user for the application with permissions that allow list access to the S3 bucket. Launch the instance as the IAM user and retrieve the IAM user's credentials from the EC2 instance user data.
3. Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 Instance metadata.
4. Create an IAM user for the application with permissions that allow list access to the S3 bucket. The application retrieves the IAM user credentials from a temporary
directory with permissions that allow read access only to the application user.
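For background on the technique these options describe: on an EC2 instance launched with an IAM role, boto3 picks up the role's temporary credentials from the instance metadata automatically, so no keys live in source code, user data, or files. A minimal sketch (the bucket and key names are hypothetical):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")  # credentials come from the instance role, if present

def presign_if_exists(bucket, key, expires=300):
    # Verify the object exists before handing out a pre-signed URL.
    try:
        s3.head_object(Bucket=bucket, Key=key)
    except ClientError:
        return None  # missing object or insufficient permissions
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=expires
    )

url = presign_if_exists("my-private-bucket", "reports/q1.pdf")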


Question : You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not
need to be recreated in the second region? (Choose 2 answers)

A. Route 53 Record Sets
B. IAM Roles
C. Elastic IP Addresses (EIP)
D. EC2 Key Pairs
E. Launch configurations
F. Security Groups


1. A,C
2. B,C
3. A,B
4. E,F
5. D,F
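Useful background for this question: IAM and Route 53 are global services, whereas EC2-scoped resources (EIPs, key pairs, launch configurations, security groups) exist per region. A small illustrative sketch, with a hypothetical DR region and group name:

import boto3

# IAM is global: the same roles are visible regardless of region.
iam = boto3.client("iam")
print([role["RoleName"] for role in iam.list_roles()["Roles"]])

# EC2 resources are regional: a security group, for example, must be
# recreated in the DR region before instances there can use it.
dr_ec2 = boto3.client("ec2", region_name="us-west-2")
dr_ec2.create_security_group(
    GroupName="web-sg", Description="Recreated for the DR region"
)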



Question : Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs and a Multi-AZ RDS Instance for data persistence.
The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache Cluster
to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%. Do you need to change anything in the architecture to maintain the high
availability of the application with the anticipated additional load? Why?


1. Yes, you should deploy two Memcached ElastiCache Clusters in different AZs because the RDS Instance will not be able to handle the load if the cache node fails.
2. No, if the cache node fails, the automated ElastiCache node recovery feature will prevent any availability impact.
3. Yes, you should deploy the Memcached ElastiCache Cluster with two cache nodes because the RDS Instance will not be able to handle the load if the cache node fails.
4. No, if the cache node fails, you can always get the same data from the DB without having any availability impact.
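Option 1 proposes running Memcached cache nodes in more than one AZ; a minimal boto3 sketch of a cross-AZ Memcached cluster follows (the cluster name, node type, and AZs are hypothetical):

import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

elasticache.create_cache_cluster(
    CacheClusterId="app-cache",
    Engine="memcached",
    CacheNodeType="cache.m5.large",
    NumCacheNodes=2,
    AZMode="cross-az",  # spread the nodes across Availability Zones
    PreferredAvailabilityZones=["us-east-1a", "us-east-1b"],
)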




Question : A read-only news reporting site with a combined web and application tier and a database tier that receives large and unpredictable traffic demands must be able to
respond to these traffic fluctuations automatically. What AWS services should be used to meet these requirements?
1. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch, and RDS with read replicas
2. Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch and RDS with read replicas
3. Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch and multi-AZ RDS
4. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch and multi-AZ RDS
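Several of these options rely on an Auto Scaling group reacting to CloudWatch metrics. As a sketch, a target-tracking policy on average CPU keeps the web/application tier sized to the traffic (the group name and target value are hypothetical):

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="news-web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # scale out/in around 50% average CPU
    },
)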




Question : A company is running a batch analysis every hour on their main transactional DB, which runs on an RDS MySQL instance, to populate their central Data
Warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management
dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is
required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as
much as possible?


1. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
2. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
3. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard.
4. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
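Options 3 and 4 both offload the hourly batch to an RDS read replica and then notify the on-premises system. A sketch of the read-replica-plus-SNS variant follows (the instance identifiers and topic ARN are hypothetical):

import boto3

rds = boto3.client("rds", region_name="us-east-1")
sns = boto3.client("sns", region_name="us-east-1")

# Run the hourly batch against a replica so the primary stays responsive.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="transactional-db-replica",
    SourceDBInstanceIdentifier="transactional-db",
)

# After the batch load into Redshift completes, notify subscribers; the
# unmodifiable on-premises system could sit behind an HTTPS endpoint
# subscribed to this topic.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:dashboard-updates",
    Message="Data warehouse refreshed; update the dashboard.",
)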



Question : Your customer is willing to consolidate their log streams (access logs, application logs, security logs, etc.) in one single system. Once consolidated, the customer
wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from
the last 12 hours.

What is the best approach to meet your customer's requirements?


1. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
2. Send all the log events to Amazon Kinesis, and develop a client process to apply heuristics on the logs.
3. Configure Amazon CloudTrail to receive custom logs and use EMR to apply heuristics on the logs.
4. Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3, and use EMR to apply heuristics on the logs.
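Option 2's Kinesis pattern fits the 12-hour look-back because a Kinesis stream retains records for 24 hours by default. A simplified producer/consumer sketch (the stream name, shard ID, and heuristic are hypothetical):

import boto3
import json

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Producer side: publish one consolidated log event.
kinesis.put_record(
    StreamName="log-stream",
    Data=json.dumps({"source": "access", "msg": "GET /index.html 500"}),
    PartitionKey="web-01",
)

# Consumer side: read one shard from the oldest record and apply a heuristic.
iterator = kinesis.get_shard_iterator(
    StreamName="log-stream",
    ShardId="shardId-000000000000",
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

for record in kinesis.get_records(ShardIterator=iterator, Limit=100)["Records"]:
    event = json.loads(record["Data"])
    if " 500" in event["msg"]:  # hypothetical heuristic: flag server errors
        print("alert:", event)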