
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : A read-only news reporting site with a combined web and application tier and a database tier receives large and unpredictable traffic demands and must be able to respond to these traffic fluctuations automatically. Which AWS services should be used to meet these requirements?
1. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas
2. Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch and RDS with read replicas
3. Access Mostly Uused Products by 50000+ Subscribers
4. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an Auto Scaling group monitored with CloudWatch and multi-AZ RDS



Correct Answer :
Explanation: A key benefit of a read replica is that you can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.

Increased Availability : Read replicas in Amazon RDS for MySQL and PostgreSQL provide a complementary availability mechanism to Amazon RDS Multi-AZ Deployments. You can use read
replica promotion as a data recovery scheme if the source DB instance fails; however, if your use case requires synchronous replication, automatic failure detection, and failover, we
recommend that you run your DB instance as a Multi-AZ deployment instead.

Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing and Q and A
portals) or compute-intensive workloads (such as a recommendation engine). Caching improves application performance by storing critical pieces of data in memory for low-latency
access. Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations. Applications that need a data structure server will find the Redis engine most useful.
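The cache-aside read path described above (check the cache, fall back to the database, then populate the cache) can be sketched as follows. This is a minimal illustration only: a plain dict stands in for the ElastiCache Memcached cluster and another for the RDS read replica, and the `CacheAside` class name is a hypothetical helper, not an AWS API.

```python
import time

class CacheAside:
    """Cache-aside read path for a read-heavy site. A dict stands in for
    an ElastiCache Memcached cluster, another for the RDS read replica --
    both are hypothetical stand-ins for illustration."""

    def __init__(self, db, ttl=60):
        self.db = db        # stand-in for the RDS read replica
        self.cache = {}     # stand-in for Memcached
        self.ttl = ttl      # seconds before a cached entry expires

    def get(self, key):
        entry = self.cache.get(key)
        if entry is not None and entry[1] > time.time():
            return entry[0]                      # cache hit: no DB round-trip
        value = self.db[key]                     # cache miss: read from the replica
        self.cache[key] = (value, time.time() + self.ttl)
        return value

replica = {"top-story": "Breaking news"}
store = CacheAside(replica, ttl=60)
first = store.get("top-story")    # miss: reads the replica, fills the cache
second = store.get("top-story")   # hit: served from memory
```

Because the instances hold no session state of their own, any instance in the Auto Scaling group can serve any request, which is exactly what makes option 1's stateless tier scale cleanly.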







Question : A company is running a batch analysis every hour on its main transactional DB, which runs on an RDS MySQL instance, in order to populate its central Data Warehouse running on Redshift. During execution of the batch, the transactional applications are very slow. When the batch completes, the company needs to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve the performance issues and automate the process as much as possible?

1. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
2. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
3. Access Mostly Uused Products by 50000+ Subscribers
4. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.


Correct Answer :
Explanation: Customers who have benchmarked Amazon Redshift against Amazon RDS have found Redshift to be 100-1000 times faster on common analytics queries. Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and by parallelizing queries across multiple nodes. Amazon Redshift has custom JDBC and ODBC drivers that you can download from the Connect Client tab of the Console, allowing you to use a wide range of familiar SQL clients. You can also use standard PostgreSQL JDBC and ODBC drivers. Data
load speed scales linearly with cluster size, with integrations to Amazon S3, Amazon DynamoDB, Amazon Elastic MapReduce, Amazon Kinesis or any SSH-enabled host.
Amazon Redshift uses a variety of innovations to obtain very high query performance on datasets ranging in size from a hundred gigabytes to a petabyte or more. It uses columnar
storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries. Amazon Redshift has a massively parallel processing (MPP) data warehouse architecture,
parallelizing and distributing SQL operations to take advantage of all available resources. The underlying hardware is designed for high performance data processing, using local
attached storage to maximize throughput between the CPUs and drives, and a 10GigE mesh network to maximize throughput between nodes.
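The I/O saving from columnar storage can be shown with a toy example. This is an illustration of the general idea only, not Redshift's actual storage format: scanning one column in a row store drags every full row through I/O, while a column store reads just the bytes of that column.

```python
# Toy illustration of row-oriented vs column-oriented layout.
rows = [{"order_id": i, "price": i * 1.5, "qty": i % 7} for i in range(1000)]

# Row store: answering "sum of price" still touches every full row.
row_store_bytes = sum(len(repr(r)) for r in rows)

# Column store: only the 'price' column is read for the same query.
columns = {name: [r[name] for r in rows] for name in rows[0]}
column_scan_bytes = len(repr(columns["price"]))

total_price = sum(columns["price"])  # same answer either way, far less I/O
```

With wide analytical tables the gap grows with the number of columns, which is why the columnar layout (plus compression and zone maps) is central to Redshift's query performance.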

Amazon Simple Notification Service (Amazon SNS) is a fast, flexible, fully managed push notification service that lets you send individual messages or fan out messages to large
numbers of recipients. Amazon SNS makes it simple and cost effective to send push notifications to mobile device users, email recipients or even send messages to other distributed
services.

With Amazon SNS, you can send notifications to Apple, Google, Fire OS, and Windows devices, as well as to Android devices in China with Baidu Cloud Push. You can use SNS to send SMS
messages to mobile device users in the US or to email recipients worldwide.
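The automation in this answer (the batch job publishes once, every interested consumer is notified) can be sketched with a tiny in-process stand-in for an SNS topic. `MiniTopic` is a hypothetical illustration, not an AWS API; with real SNS the batch job would publish to a topic ARN, and the on-premises system would subscribe via, say, email or an HTTP endpoint.

```python
class MiniTopic:
    """In-process stand-in for an SNS topic: one publish fans out to every
    subscriber, decoupling the batch job from its consumers."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for notify in self.subscribers:
            notify(message)

received = []
topic = MiniTopic()
# e.g. the on-premises dashboard system and an ops mailbox both subscribe
topic.subscribe(lambda msg: received.append(("dashboard", msg)))
topic.subscribe(lambda msg: received.append(("ops-email", msg)))
topic.publish("Redshift batch complete")   # replaces the manually sent email
```

The publisher never needs to know who the subscribers are, which is why SNS suits the unmodifiable on-premises system better than a manual email.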






Question : Your customer wants to consolidate their log streams (access logs, application logs, security logs, etc.) into one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate the heuristics, which requires going back over data samples extracted from the last 12 hours.

What is the best approach to meet your customer's requirements?

1. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
2. Send all the log events to Amazon Kinesis and develop a client process to apply heuristics on the logs
3. Access Mostly Uused Products by 50000+ Subscribers
4. Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3 use EMR to apply heuristics on the logs


Correct Answer :
Explanation: Amazon Kinesis is a fully managed streaming data service. You can continuously add various types of data such as clickstreams, application logs, and social
media to an Amazon Kinesis stream from hundreds of thousands of sources. Within seconds, the data will be available for your Amazon Kinesis Applications to read and process from the
stream.

The throughput of an Amazon Kinesis stream is designed to scale without limits by increasing the number of shards within a stream. However, there are certain limits you should keep in mind while using Amazon Kinesis:
Records of a stream are accessible for up to 24 hours from the time they are added to the stream.
The maximum size of a data blob (the data payload before Base64-encoding) within one record is 1 megabyte (MB).
Each shard can support up to 1000 PUT records per second.
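The per-shard limits above translate directly into a sizing rule of thumb. The sketch below uses the 1,000 PUT records/sec limit quoted here, plus an assumed 1 MB/sec per-shard ingest limit from the AWS documentation (not stated above); `shards_needed` is a hypothetical helper, not a capacity planner.

```python
import math

def shards_needed(records_per_sec, avg_record_kb):
    """Estimate the shard count for a stream from two per-shard limits:
    1,000 PUT records/sec (quoted above) and 1 MB/sec of ingest
    (an assumption from the AWS docs). A rough sizing sketch only."""
    by_record_rate = math.ceil(records_per_sec / 1000)
    by_ingest_rate = math.ceil(records_per_sec * avg_record_kb / 1024)
    return max(by_record_rate, by_ingest_rate, 1)

# 5,000 log events/sec at ~0.5 KB each: the record-rate limit dominates.
print(shards_needed(5000, 0.5))
```

For the log-consolidation scenario, this is why Kinesis fits: as log volume grows, you reshard rather than re-architect, and the 24-hour retention window covers the 12-hour lookback the customer needs.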





Related Questions


Question : An organization is planning to set up WordPress on the AWS VPC. The organization needs automated HA and DR along with a high security standard. Which of the below mentioned configurations satisfies the organization's requirement?
1. Create two separate VPCs and run RDS. RDS will have the multi AZ feature enabled which spans across these two VPCs using VPC peering. Setup the App server with one of
the public subnets of any VPC.
2. Create a VPC with one private and one public subnet in separate AZs. Setup the EC2 instance with a DB in the private subnet and the web application in a public subnet.
3. Access Mostly Uused Products by 50000+ Subscribers
in the public subnet.
4. Create two separate VPCs in different zones. Setup two EC2 instances by installing a DB in the two different VPCs and enable the failover mechanism. Setup the App server with one of the public subnets of any VPC.


Question : An organization is planning to host an application on the AWS VPC. The organization wants dedicated instances. However, an AWS consultant advised the organization not to use dedicated instances with VPC as the design has a few limitations. Which of the below mentioned statements is not a limitation of dedicated instances with VPC?
1. All instances launched with this VPC will always be dedicated instances and the user cannot use a default tenancy model for them.
2. The EBS volume will not be on the same tenant hardware as the EC2 instance though the user has configured dedicated tenancy.
3. Access Mostly Uused Products by 50000+ Subscribers
4. The user cannot use Reserved Instances with a dedicated tenancy model.



Question : An application is running on AWS EC2. The application has almost zero load for most of the day but experiences higher CPU utilization between 8 AM to 10 AM and 7 PM to 9 PM. Which of the below mentioned EC2 configurations will be more helpful and cost effective in this scenario?
1. Use EC2 with the spot instance model to scale up and down using Auto Scaling.
2. Use EC2 with Auto Scaling which will scale up when the load increases.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Use EC2 with ELB which distributes the higher load effectively.




Question : A customer has a website which shows all the deals available across the market. The site generally experiences a load requiring several large EC2 instances. However, a week before the Thanksgiving vacation they encounter a load of almost 20 large instances. The load during that period varies over the day based on office timings. Which of the below mentioned solutions is cost effective as well as helps the website achieve better performance?
1. Keep only 10 instances running and manually launch 10 instances every day during office hours.
2. Setup to run 10 instances during the pre-vacation period and only scale up during the office time by launching 10 more instances using the AutoScaling schedule.
3. Access Mostly Uused Products by 50000+ Subscribers
4. During the pre-vacation period setup a scenario where the organization has 15 instances running and 5 instances to scale up and down using Auto Scaling based on the
network I/O policy.





Question : QuickTechie.com has created multiple components of a single application for compartmentalization. Currently all the components are hosted on a single EC2 instance. For security reasons, QuickTechie wants to implement two separate SSL certificates for the separate modules, although it is already using VPC. How can the organization achieve this with a single instance?
1. Create a VPC instance which will have multiple network interfaces with multiple elastic IP addresses.
2. You have to launch two instances each in a separate subnet and allow VPC peering for a single IP.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Create a VPC instance which will have multiple subnets attached to it and each will have a separate IP address.


Question : Acmeshell.com is planning to create a secure, scalable application with AWS VPC and ELB. It has two instances already running, and each instance has an ENI attached to it in addition to the primary network interface. The primary network interface and the additional ENI both have an elastic IP attached. If those instances are registered with the ELB and Acmeshell wants the ELB to send data to a particular EIP of an instance, how can they achieve this?
1. Acmeshell should ensure that the IP which is required to receive the ELB traffic is attached to an additional ENI.
2. It is not possible to attach an instance with two ENIs with ELB as it will give an IP conflict error.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Acmeshell should ensure that the IP which is required to receive the ELB traffic is attached to a primary network interface.