
AWS Certified Solutions Architect – Associate Questions and Answers (Dumps and Practice Questions)



Question : With regard to DynamoDB, for each secondary index, you must specify which of the following:

1. The type of index to be created
2. A name for the index
3. Additional attributes, if any, to project from the table into the index
4. For a global secondary index, you must specify read and write capacity unit settings
5. The key schema for the index.
Select the correct combination:
1. 1,2,3,5
2. 1,2,3,4
3. Access Mostly Uused Products by 50000+ Subscribers
4. 1,2,5
5. All of 1, 2, 3, 4 and 5

Correct Answer : 5

Explanation: For each secondary index, you must specify the following:

- The type of index to be created: either a local secondary index or a global secondary index.
- A name for the index. The naming rules for indexes are the same as those for tables, as listed in Limits in DynamoDB. The name must be unique for the table it is associated with, but you can use the same name for indexes that are associated with different tables.
- The key schema for the index. Every attribute in the index key schema must be a scalar data type, not a multi-valued set. Other requirements for the key schema depend on the type of index: for a local secondary index, the hash key must be the same as the table's hash key, and the range key must be a non-key table attribute; for a global secondary index, the hash key can be any table attribute, and the range key is optional and can also be any table attribute.
- Additional attributes, if any, to project from the table into the index. These attributes are in addition to the table key attributes, which are automatically projected into every index. You can project attributes of any data type, including scalar data types and multi-valued sets.
- The provisioned throughput settings for the index, if necessary. For a local secondary index, you do not need to specify read and write capacity unit settings; any read and write operations on a local secondary index draw from the provisioned throughput settings of its parent table. For a global secondary index, you must specify read and write capacity unit settings, and these provisioned throughput settings are independent of the table's settings.

For maximum query flexibility, you can create up to five local secondary indexes and up to five global secondary indexes per table.
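
These same five choices appear as parameters when creating a table through the API. The following is a minimal boto3 sketch (table, attribute, and index names are hypothetical) showing where each numbered item from the list above is supplied; item 1, the index type, is expressed by choosing the LocalSecondaryIndexes or GlobalSecondaryIndexes parameter:

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="Music",  # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "Artist", "AttributeType": "S"},
        {"AttributeName": "SongTitle", "AttributeType": "S"},
        {"AttributeName": "AlbumTitle", "AttributeType": "S"},
        {"AttributeName": "Genre", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "Artist", "KeyType": "HASH"},
        {"AttributeName": "SongTitle", "KeyType": "RANGE"},
    ],
    # (1) Local secondary index: the hash key must match the table's hash key,
    # and there are no capacity settings because an LSI shares the parent
    # table's provisioned throughput.
    LocalSecondaryIndexes=[
        {
            "IndexName": "AlbumIndex",                      # (2) index name
            "KeySchema": [                                  # (5) key schema
                {"AttributeName": "Artist", "KeyType": "HASH"},
                {"AttributeName": "AlbumTitle", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "KEYS_ONLY"},  # (3) projected attributes
        }
    ],
    # (1) Global secondary index: any table attribute may be the hash key,
    # and the index carries its own capacity, independent of the table.
    GlobalSecondaryIndexes=[
        {
            "IndexName": "GenreIndex",
            "KeySchema": [{"AttributeName": "Genre", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
            "ProvisionedThroughput": {                      # (4) GSI capacity
                "ReadCapacityUnits": 5,
                "WriteCapacityUnits": 5,
            },
        }
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)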






Question : Which of the following storage classes are available in AWS?
1. Standard S3 storage
2. RRS (Reduced Redundancy Storage)
3. Amazon Glacier
4. All of the above

Correct Answer : 4

Explanation: Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers.

Amazon S3 provides a simple web-services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, secure, fast, inexpensive infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers.

Reduced Redundancy Storage (RRS) is a storage option within Amazon S3 that enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3's standard storage. It provides a cost-effective, highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced. Amazon S3's standard and reduced redundancy options both store data in multiple facilities and on multiple devices, but with RRS, data is replicated fewer times, so the cost is less. Amazon S3 standard storage is designed to provide 99.999999999% durability and to sustain the concurrent loss of data in two facilities, while RRS is designed to provide 99.99% durability and to sustain the loss of data in a single facility. Both the standard and RRS storage options are designed to be highly available, and both are backed by Amazon S3's Service Level Agreement.

Amazon Glacier is an extremely low-cost storage service that provides secure and durable storage for data archiving and backup. In order to keep costs low, Amazon Glacier is optimized for data that is infrequently accessed and for which retrieval times of several hours are suitable. With Amazon Glacier, customers can reliably store large or small amounts of data for as little as $0.01 per gigabyte per month, a significant savings compared to on-premises solutions.
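
As an illustration, here is a minimal boto3 sketch (bucket name, keys, and prefix are hypothetical) of choosing a storage class per object at upload time, and of archiving to Glacier via a lifecycle rule rather than a storage class on upload:

import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"  # hypothetical bucket name

# Standard storage is the default; RRS is selected per object at upload time.
s3.put_object(Bucket=bucket, Key="critical/report.pdf", Body=b"...")
s3.put_object(
    Bucket=bucket,
    Key="thumbnails/img001.jpg",
    Body=b"...",
    StorageClass="REDUCED_REDUNDANCY",  # RRS: lower cost, 99.99% durability
)

# Archival to Glacier is configured as a lifecycle transition on the bucket.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
            }
        ]
    },
)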






Question : A read-only news reporting site with a combined web and application tier and a database tier receives large and unpredictable traffic demands, and must be able to respond to these traffic fluctuations automatically. Which AWS services should be used to meet these requirements?
1. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas
2. Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas
3. Access Mostly Uused Products by 50000+ Subscribers
4. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an Auto Scaling group monitored with CloudWatch, and multi-AZ RDS



Correct Answer : 1

Explanation: The benefit of a read replica is that you can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.

Increased availability: Read replicas in Amazon RDS for MySQL and PostgreSQL provide a complementary availability mechanism to Amazon RDS Multi-AZ deployments. You can use read replica promotion as a data recovery scheme if the source DB instance fails; however, if your use case requires synchronous replication, automatic failure detection, and failover, we recommend that you run your DB instance as a Multi-AZ deployment instead.

Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, and Q&A portals) or compute-intensive workloads (such as a recommendation engine). Caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations. Applications needing a data structure server will find the Redis engine most useful.
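
A hedged sketch of the read-replica half of option 1, using boto3 (all instance identifiers and the instance class are hypothetical): create a replica of the source database, then point the application's read-only connections at the replica's endpoint while writes continue to go to the source.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica so read-heavy, unpredictable traffic is routed
# away from the source instance (identifiers are hypothetical).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="news-db-replica-1",
    SourceDBInstanceIdentifier="news-db",
    DBInstanceClass="db.r5.large",
)

# Wait until the replica is available, then read its endpoint; the
# application sends read queries here and writes to the source instance.
rds.get_waiter("db_instance_available").wait(
    DBInstanceIdentifier="news-db-replica-1"
)
reply = rds.describe_db_instances(DBInstanceIdentifier="news-db-replica-1")
print(reply["DBInstances"][0]["Endpoint"]["Address"])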







Related Questions


Question : An organization has configured a VPC with an Internet Gateway (IGW), pairs of public and private subnets (each with one subnet per Availability Zone), and an Elastic Load Balancer (ELB) configured to use the public subnets. The application's web tier leverages the ELB, Auto Scaling, and a multi-AZ RDS database instance. The organization would like to eliminate any potential single points of failure in this design. What step should you take to achieve this organization's objective?
1. Nothing, there are no single points of failure in this architecture.
2. Create and attach a second IGW to provide redundant internet connectivity.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Create a second multi-AZ RDS instance in another Availability Zone and configure replication to provide a redundant database.


Question :

Your company has a backup policy that requires backed-up data to be "quickly" accessible, within minutes, for the first 6 months of the data's life. After the first 6 months, the data can be archived. How would you automatically handle this procedure?

1. Write a script that moves data stored on an EBS volume to S3 after 6 months.
2. Use Amazon Direct Connect to store the data onsite and back it up to S3.
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above



Question :

To enable RDS snapshots, what storage engine must MySQL tables be configured with?
1. InnoDB
2. MyISAM
3. Access Mostly Uused Products by 50000+ Subscribers
4. RDS


Question :

The base URI for all requests for instance metadata is _____.


1. http://169.254.169.254/latest/
2. http://10.0.0.1/latest/
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above
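
For reference, a small sketch of reading instance metadata from that base URI. This only works from within a running EC2 instance, and it assumes the instance permits unauthenticated (IMDSv1-style) metadata access:

import urllib.request

# The instance metadata service answers at this link-local address,
# reachable only from inside the instance itself.
BASE = "http://169.254.169.254/latest/"

with urllib.request.urlopen(BASE + "meta-data/instance-id", timeout=2) as resp:
    print(resp.read().decode())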


Question : RDS Snapshots are automatically stored in ____.

1. Elastic Block Store
2. RDS Storage Type
3. Access Mostly Uused Products by 50000+ Subscribers
4. S3


Question :

What command would you use to create a snapshot using AWS command line tools?

1. ec2-deploy-snapshot
2. ec2-create-snapshot
3. Access Mostly Uused Products by 50000+ Subscribers
4. ec2-make-snapshot
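
The ec2-create-snapshot command belongs to the legacy EC2 API command-line tools. A minimal boto3 equivalent looks like this (the volume ID is hypothetical):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Equivalent of: ec2-create-snapshot vol-0123456789abcdef0
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # hypothetical EBS volume ID
    Description="Point-in-time backup of the data volume",
)
print(snap["SnapshotId"])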