
AWS Certified Solutions Architect – Associate Questions and Answers (Dumps and Practice Questions)



Question : In the context of a VPC, what is the default maximum number of Internet gateways allowed per region?

1. 5
2. 200
3. Access Mostly Uused Products by 50000+ Subscribers
4. No limit
Ans : 1
Exp : You can create as many Internet gateways as your VPCs-per-region limit allows, which defaults to 5. Only one Internet gateway can be attached to a VPC at a time.
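For reference, creating and attaching an Internet gateway can be scripted. A minimal boto3 sketch, assuming configured AWS credentials and a hypothetical VPC ID:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a new Internet gateway in the region.
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]

# Attach it to a VPC ("vpc-12345678" is a hypothetical ID).
# Only one Internet gateway can be attached to a VPC at a time;
# attaching a second one to the same VPC would fail.
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId="vpc-12345678")
print("Attached", igw_id)
```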




Question : Which one of the following statements is incorrect?


1. AWS Marketplace is the simplest way for developers to get paid for Amazon AMIs or applications they build on top of Amazon S3.
2. AWS Marketplace supports EBS-backed software, where DevPay does not.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Software providers benefit from AWS Marketplace's marketing outreach and ease of discovery.
Ans : 1
Exp : Statement 1 is incorrect because it is Amazon DevPay, not AWS Marketplace, that is the simplest way for developers to get paid for Amazon EC2 Machine Images (AMIs) or applications they build on top of Amazon S3. Developers use the simple Amazon DevPay web interface to register their application or AMI with Amazon DevPay and configure their desired pricing. They embed the Amazon DevPay purchase pipeline link into their website to allow their customers to purchase their product. Amazon DevPay allows developers to start selling their application without using complex APIs or writing code to build an order pipeline or a billing system.

Amazon DevPay is the only payments application that automatically meters your customers' usage of Amazon Web Services (such as Amazon S3 or Amazon EC2) and allows you to charge your customers for that usage at whatever price you choose. Amazon DevPay provides you the flexibility to charge for your application based on any combination of a one-time fixed fee, a recurring monthly fee, or fees based on the monthly usage of underlying AWS services.

Amazon DevPay also provides account management functions that you'd otherwise have to build and manage yourself. Amazon DevPay keeps track of all your customers' subscriptions and their associated status. When customers request access to your application, Amazon DevPay authenticates these customers and determines whether they have the requisite credentials and payment standing to use your application. Amazon DevPay also provides you with business reports to view revenue, cost, and AWS service usage by customer.

Amazon DevPay shares the risk of customer nonpayment with developers. You're responsible for the cost of AWS services that a customer consumes only up to the amount that the customer actually pays. If a customer does not pay, AWS does not charge you these costs.




Question : A company is running a batch analysis every hour on their main transactional DB, which runs on an RDS MySQL instance, to populate their central data warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top-management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve the performance issues and automate the process as much as possible?


1. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
2. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
3. Access Mostly Uused Products by 50000+ Subscribers
4. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.


Correct Answer :
Explanation: When benchmarked against Amazon RDS, Amazon Redshift has been found to be 100-1000 times faster on common analytics queries. Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and by parallelizing queries across multiple nodes. Amazon Redshift has custom JDBC and ODBC drivers that you can download from the Connect Client tab of the console, allowing you to use a wide range of familiar SQL clients. You can also use standard PostgreSQL JDBC and ODBC drivers. Data load speed scales linearly with cluster size, with integrations to Amazon S3, Amazon DynamoDB, Amazon Elastic MapReduce, Amazon Kinesis or any SSH-enabled host.

Amazon Redshift uses a variety of innovations to obtain very high query performance on datasets ranging in size from a hundred gigabytes to a petabyte or more. It uses columnar storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries. Amazon Redshift has a massively parallel processing (MPP) data warehouse architecture, parallelizing and distributing SQL operations to take advantage of all available resources. The underlying hardware is designed for high-performance data processing, using locally attached storage to maximize throughput between the CPUs and drives, and a 10GigE mesh network to maximize throughput between nodes.

Amazon Simple Notification Service (Amazon SNS) is a fast, flexible, fully managed push notification service that lets you send individual messages or fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost effective to send push notifications to mobile device users and email recipients, or even to send messages to other distributed services.

With Amazon SNS, you can send notifications to Apple, Google, Fire OS, and Windows devices, as well as to Android devices in China with Baidu Cloud Push. You can use SNS to send SMS messages to mobile device users in the US or to email recipients worldwide.
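For the scenario above, the manual email could be replaced with an SNS notification published when the batch finishes, with the on-premises dashboard system subscribed over HTTPS. A minimal boto3 sketch; the topic name and endpoint URL are hypothetical, and the receiving system would still have to confirm the subscription:

```python
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Hypothetical topic that signals "hourly batch finished".
topic_arn = sns.create_topic(Name="dashboard-refresh")["TopicArn"]

# The on-premises dashboard system subscribes once over HTTPS
# (the endpoint URL here is an assumption for illustration).
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="https",
    Endpoint="https://dashboard.example.com/sns-hook",
)

# Published by the batch job when it completes, instead of a manual email.
sns.publish(
    TopicArn=topic_arn,
    Subject="Batch complete",
    Message="Hourly load finished; refresh the management dashboard.",
)
```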







Question : Your VPC includes a default security group whose initial rules are to deny all inbound traffic,
allow all outbound traffic, and all traffic between instances in the group will be

1. Increased
2. Reduced
3. Access Mostly Uused Products by 50000+ Subscribers
4. Allowed
Ans : 4
Explanation: Your VPC includes a default security group whose initial rules are to deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances in the group. You can't delete this group; however, you can change the group's rules. The procedure is the same as modifying any other security group.
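For illustration, the default security group can be inspected and its rules changed (though the group itself cannot be deleted). A minimal boto3 sketch, assuming a hypothetical VPC ID and an example CIDR range:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find the default security group of a hypothetical VPC.
resp = ec2.describe_security_groups(Filters=[
    {"Name": "vpc-id", "Values": ["vpc-12345678"]},
    {"Name": "group-name", "Values": ["default"]},
])
default_sg = resp["SecurityGroups"][0]
print(default_sg["GroupId"], default_sg["IpPermissions"])

# The group cannot be deleted, but its rules can be changed like any
# other security group, e.g. allow inbound SSH from an example CIDR.
ec2.authorize_security_group_ingress(
    GroupId=default_sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
    }],
)
```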



Question : Elastic Load Balancing (ELB) consists of which two of the following components?

1. Load Balancer AND Load Monitoring Service
2. Load Distribution Controller AND Load Monitoring Service
3. Access Mostly Uused Products by 50000+ Subscribers
4. Controller Service AND Load Balancer
Ans : 4
Exp : Elastic Load Balancing (ELB) consists of two components: the load balancers and the controller service. The load balancers monitor the traffic and handle requests that come in through the Internet. The controller service monitors the load balancers, adding and removing load balancers as needed and verifying that the load balancers are functioning properly.

You have to create your load balancer before you can start using it. Elastic Load Balancing automatically generates a unique Domain Name System (DNS) name for each load balancer instance you create. For example, if you create a load balancer named myLB in the us-east-1a Availability Zone, your load balancer might have a DNS name such as myLB-1234567890.us-east-1.elb.amazonaws.com. Clients can access your load balancer by using the ELB-generated DNS name.

If you'd rather use a user-friendly domain name, such as www.example.com, instead of the load balancer DNS name, you can create a custom domain name and then associate the custom domain name with the load balancer DNS name. When a request is placed to your load balancer using the custom domain name that you created, it resolves to the load balancer DNS name.

When a client makes a request to your application using either your load balancer's DNS name or the custom domain name, the DNS server returns one or more IP addresses. The client then makes a connection to your load balancer at the provided IP address. When Elastic Load Balancing scales, it updates the DNS record for the load balancer. The DNS record for the load balancer has the time-to-live (TTL) set to 60 seconds. This setting ensures that IP addresses can be re-mapped quickly in response to events that cause Elastic Load Balancing to scale up or down.

When you create a load balancer, you must configure it to accept incoming traffic and route requests to your EC2 instances. The controller ensures that load balancers are operating with the correct configuration.
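As a sketch of the creation step described above, the following boto3 code creates a classic load balancer and prints the DNS name that Elastic Load Balancing generates for it; the Availability Zone matches the example, while the instance ID is hypothetical:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Create a classic load balancer; ELB generates its DNS name automatically.
resp = elb.create_load_balancer(
    LoadBalancerName="myLB",
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80, "InstancePort": 80}],
    AvailabilityZones=["us-east-1a"],
)
# e.g. myLB-1234567890.us-east-1.elb.amazonaws.com
print("DNS name:", resp["DNSName"])

# Register back-end EC2 instances (hypothetical ID) with the load balancer.
elb.register_instances_with_load_balancer(
    LoadBalancerName="myLB",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)
```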



Question : A load balancer is the destination to which all requests intended for your load balanced application should be directed.
Each load balancer can distribute requests to multiple EC2 instances. A load balancer is represented by


1. Multiple Availability Zones and EC2 Region
2. A DNS name and a set of ports
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above
Ans : 2
Exp : A load balancer is the destination to which all requests intended for your load-balanced application should be directed. Each load balancer can distribute requests to multiple EC2 instances. A load balancer is represented by a DNS name and a set of ports. Load balancers can span multiple Availability Zones within an EC2 region, but they cannot span multiple regions.

To create or work with a load balancer in a specific region, use the corresponding regional service endpoint.

Elastic Load Balancing automatically generates a DNS name for each load balancer instance you create. Typically, the DNS name includes the name of the AWS region in which the load balancer is created. For example, if you create a load balancer named myLB in the us-east-1a Availability Zone, your load balancer might have a DNS name such as myLB-1234567890.us-east-1.elb.amazonaws.com.
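To illustrate the "DNS name and a set of ports" representation and the use of a regional endpoint, a minimal boto3 sketch that describes the example load balancer (assuming it exists in us-east-1):

```python
import boto3

# The client is created against a specific regional endpoint (here us-east-1).
elb = boto3.client("elb", region_name="us-east-1")

lb = elb.describe_load_balancers(LoadBalancerNames=["myLB"])["LoadBalancerDescriptions"][0]

print("DNS name:", lb["DNSName"])
print("Availability Zones:", lb["AvailabilityZones"])
for desc in lb["ListenerDescriptions"]:
    listener = desc["Listener"]
    print("Port mapping:", listener["LoadBalancerPort"], "->", listener["InstancePort"])
```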



Question : By default, a load balancer routes each request independently to the application instance with the


1. All requests coming from the user during the session will be sent to the same application instance.
2. A load balancer routes each request independently to the application instance with the smallest load
3. Access Mostly Uused Products by 50000+ Subscribers
4. While setting up the ELB you must have to define Distribution Algorithm, there is no default behaviour.
Ans : 2
Exp : Sticky Sessions

By default, a load balancer routes each request independently to the application instance with the smallest load. However, you can use the sticky session feature (also known as session affinity), which enables the load balancer to bind a user's session to a specific application instance. This ensures that all requests coming from the user during the session will be sent to the same application instance.

The key to managing sticky sessions is determining how long your load balancer should consistently route the user's requests to the same application instance. If your application has its own session cookie, then you can set Elastic Load Balancing to create a session cookie that follows the duration specified by the application's session cookie. If your application does not have its own session cookie, then you can set Elastic Load Balancing to create a session cookie by specifying your own stickiness duration. You can associate a stickiness duration only with HTTP/HTTPS load balancer listeners.

An application instance must always receive and send two cookies: a cookie that defines the stickiness duration, and a special Elastic Load Balancing cookie named AWSELB that has the mapping to the application instance.
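A minimal boto3 sketch of configuring both stickiness variants on a classic load balancer's HTTP listener; the policy names, duration, and application cookie name are assumptions for illustration:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Duration-based stickiness: ELB creates the AWSELB cookie and keeps routing
# the user to the same instance for the specified period.
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName="myLB",
    PolicyName="duration-sticky",
    CookieExpirationPeriod=3600,  # seconds
)

# Application-controlled stickiness: the AWSELB cookie follows the lifetime
# of the application's own session cookie (hypothetical name "JSESSIONID").
elb.create_app_cookie_stickiness_policy(
    LoadBalancerName="myLB",
    PolicyName="app-sticky",
    CookieName="JSESSIONID",
)

# Attach one of the policies to the HTTP listener on port 80.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="myLB",
    LoadBalancerPort=80,
    PolicyNames=["duration-sticky"],
)
```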



Question : You have just implemented ELB in front of a fleet of EC2 servers on which your website is hosted. However, some EC2 instances keep failing, about once a month on average. Which of the following are ways by which ELB can find which instances are not serving requests?
A. ELB can send a page request to find whether the server is responding or not
B. ELB can ping the server to find whether it is alive or not
C. ELB will try to log in to the website using an anonymous user with a default password set by the admin
D. ELB will try to make a connection with the EC2 instance

1. A,B,C
2. B,C,D
3. Access Mostly Uused Products by 50000+ Subscribers
4. A,B,D

Correct Answer :
Explanation: ELB can use any of the following ways to find whether an instance is responding (as sketched below):
- Ping the server
- Make a connection attempt with the EC2 instance
- Make a page request on the EC2 instance
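A minimal boto3 sketch of configuring such a health check on a classic load balancer; the target path, intervals, and thresholds are assumptions:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# The Target can be a connection attempt ("TCP:80") or a page request
# ("HTTP:80/health.html" or "HTTPS:443/health.html").
elb.configure_health_check(
    LoadBalancerName="myLB",
    HealthCheck={
        "Target": "HTTP:80/health.html",  # hypothetical health page
        "Interval": 30,            # seconds between checks
        "Timeout": 5,              # seconds to wait for a response
        "UnhealthyThreshold": 2,   # failures before marking OutOfService
        "HealthyThreshold": 3,     # successes before marking InService
    },
)
```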





Question : You have a heavy-traffic website launched on a fleet of EC2 servers behind an ELB. You have also configured CloudWatch monitoring, which sends EC2 instance status to your mobile phone through SNS. Now, in the middle of the night, you receive an alert that one of the EC2 instances is down. How can ELB help in this case, assuming connection draining is enabled?
A. ELB will check that instance's health and send the error page, defined by you, to the end user.
B. ELB will try to finish all the in-flight requests until the timeout happens.
C. The connection will remain open with the instance, the end user will never get any response, and he has to refresh the page to open a new request.
D. ELB will immediately close all connections with the EC2 instance.

1. A,B
2. B,C
3. Access Mostly Uused Products by 50000+ Subscribers
4. A,D
5. B,D

Correct Answer :
Explanation: If you have enabled connection draining on the ELB, the load balancer will try to finish all in-flight (already being served) requests until the timeout elapses. It will also not send any new requests to unhealthy or de-registered instances.
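A minimal boto3 sketch of enabling connection draining on a classic load balancer; the timeout value is an assumption:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# With connection draining enabled, a de-registered or unhealthy instance
# keeps serving its in-flight requests until they finish or the timeout
# elapses, and no new requests are routed to it.
elb.modify_load_balancer_attributes(
    LoadBalancerName="myLB",
    LoadBalancerAttributes={
        "ConnectionDraining": {"Enabled": True, "Timeout": 300}  # seconds
    },
)
```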


Related Questions


Question : Which of the following is used by Amazon RDS to provide high availability and failover support for DB instances?
1. Multi Region Deployment of DB Instances
2. Multi-AZ deployments
3. Access Mostly Uused Products by 50000+ Subscribers
4. Read Replicas


Question : What would you use to categorize your EC2 resources by application or purpose?

1. Instance Names
2. Filters
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of Above



Question :

What is the maximum write throughput that can be provisioned for a single DynamoDB table?

1. 1,000 write capacity units
2. 10,000 write capacity units
3. Access Mostly Uused Products by 50000+ Subscribers
4. There are throughput capacity limits, but to provision more than 10,000 write capacity units AWS must be contacted first.


Question :

What is Amazon S3 RRS?


1. Reduced Redundancy Storage with 99.99% durability
2. Reduced Redundancy Storage with 99.999999999% durability
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above


Question :

What happens to an instance's ephemeral storage when an instance store-backed instance is stopped?


1. All data is lost (Data only exists for life of an instance).
2. It persists.
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above



Question :

What is the maximum size of an EBS storage device?


1. 1GB
2. 1TB
3. Access Mostly Uused Products by 50000+ Subscribers
4. 2TB