
AWS Certified Solutions Architect – Associate Questions and Answers (Dumps and Practice Questions)



Question : In DynamoDB, to which index does the following statement correctly apply: "The hash key of the index is the same attribute as the hash key of the table. The range key can be any scalar table attribute."
1. Local Secondary Index
2. Local Primary Index
3. Global Secondary Index
4. Global Primary Index

Correct Answer : 1 (Local Secondary Index)
Exp: DynamoDB supports two types of secondary indexes:
Local secondary index: an index that has the same hash key as the table, but a different range key. A local secondary index is "local" in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
Global secondary index: an index with a hash key and range key that can be different from those on the table. A global secondary index is considered "global" because queries on the index can span all of the data in a table, across all partitions.

Local secondary index
The hash key of the index is the same attribute as the hash key of the table. The range key can be any scalar table attribute.

Global secondary index
The index hash key and range key (if present) can be any scalar table attributes.
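The key rules above can be sketched as a table definition. This is a minimal illustration, not a prescribed schema: the table name, attribute names, and throughput values are all made up, and the dict matches the shape you would pass to boto3's `create_table`.

```python
# Hypothetical "Orders" table illustrating the LSI vs GSI key rules.
table_definition = {
    "TableName": "Orders",
    "AttributeDefinitions": [
        {"AttributeName": "CustomerId", "AttributeType": "S"},  # table hash key
        {"AttributeName": "OrderId", "AttributeType": "S"},     # table range key
        {"AttributeName": "OrderDate", "AttributeType": "S"},   # LSI range key
        {"AttributeName": "ProductId", "AttributeType": "S"},   # GSI hash key
    ],
    "KeySchema": [
        {"AttributeName": "CustomerId", "KeyType": "HASH"},
        {"AttributeName": "OrderId", "KeyType": "RANGE"},
    ],
    # LSI: hash key MUST equal the table's hash key; only the range key differs.
    "LocalSecondaryIndexes": [{
        "IndexName": "ByOrderDate",
        "KeySchema": [
            {"AttributeName": "CustomerId", "KeyType": "HASH"},
            {"AttributeName": "OrderDate", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    # GSI: hash (and optional range) key can be any scalar attributes.
    "GlobalSecondaryIndexes": [{
        "IndexName": "ByProduct",
        "KeySchema": [{"AttributeName": "ProductId", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
}
# To create the table for real:
#   import boto3
#   boto3.client("dynamodb").create_table(**table_definition)
```

Note that the LSI block carries no `ProvisionedThroughput` of its own (it shares the table's), while each GSI provisions throughput separately.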





Question : Which of the following are use cases for Amazon DynamoDB? Choose 3 answers
A. Storing BLOB data.
B. Managing web sessions.
C. Storing JSON documents.
D. Storing metadata for Amazon S3 objects.
E. Running relational joins and complex updates.
F. Storing large amounts of infrequently accessed data.


1. A,B,C
2. B,C,D
3. (option omitted in source)
4. A,E,D
5. B,D,E
Ans : 2
Exp : Q: When should I use Amazon DynamoDB vs Amazon S3?

Amazon DynamoDB stores structured data, indexed by primary key, and allows low-latency read and write access to items ranging from 1 byte up to 400 KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. To optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.

Tomcat applications often store session-state data in memory. However, this approach doesn't scale well; once the application grows beyond a single web server, the session state must be shared between servers. A common solution is to set up a dedicated session-state server with MySQL. This approach also has drawbacks: you must administer another server, the session-state server is a single point of failure, and the MySQL server itself can cause performance problems.

DynamoDB, a NoSQL database store from Amazon Web Services (AWS), avoids these drawbacks by providing an effective solution for sharing session state across web servers.
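As a rough sketch of that pattern, a session can be written to DynamoDB as a single item keyed by session ID, with a TTL attribute so expired sessions age out automatically. The table name, schema, and values here are hypothetical.

```python
import time

# A web session as a single DynamoDB item (hypothetical "Sessions" table,
# hash key "SessionId"). Any web server can read or write it, so no
# dedicated session-state server is needed.
session_item = {
    "SessionId": "sess-9f8e7d",            # hash key
    "UserId": "u-1234",
    "Cart": ["sku-1", "sku-2"],
    "ExpiresAt": int(time.time()) + 1800,  # TTL attribute: 30 minutes from now
}

# To persist it for real:
#   import boto3
#   boto3.resource("dynamodb").Table("Sessions").put_item(Item=session_item)
```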

JSON Document Support
You can now store entire JSON-formatted documents as single DynamoDB items (subject to the newly increased 400 KB size limit).

This new document-oriented support is implemented in the AWS SDKs and makes use of some new DynamoDB data types. The document support (available now in the AWS SDK for Java, the SDK for .NET, the SDK for Ruby, and an extension to the SDK for JavaScript in the Browser) makes it easy to map your JSON data or native-language objects onto DynamoDB's native data types, and to support queries based on the structure of your document. You can also view and edit JSON documents from within the AWS Management Console.

With this addition, DynamoDB becomes a full-fledged document store. Using the AWS SDKs, it is easy to store JSON documents in a DynamoDB table while preserving their complex and possibly nested "shape." The new data types could also be used to store other structured formats such as HTML or XML by building a very thin translation layer.
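A minimal sketch of storing a JSON document as one item, assuming a hypothetical "Documents" table with hash key "DocId". With the boto3 resource API, nested dicts and lists map onto DynamoDB's document types (M and L).

```python
import json

# A nested JSON document stored as a single DynamoDB item.
document = {
    "DocId": "invoice-1001",                  # hash key
    "Customer": {"Name": "Alice", "Tier": "gold"},
    "Lines": [
        {"Sku": "A-1", "Qty": 2},
        {"Sku": "B-7", "Qty": 1},
    ],
}

# DynamoDB items are capped at 400 KB; check before writing.
size_bytes = len(json.dumps(document).encode("utf-8"))
assert size_bytes <= 400 * 1024, "item exceeds DynamoDB's 400 KB limit"

# To write it for real:
#   import boto3
#   boto3.resource("dynamodb").Table("Documents").put_item(Item=document)
```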



Question : You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?


1. Amazon Kinesis
2. AWS Data Pipeline
3. (option omitted in source)
4. Amazon Simple Queue Service
Ans : 1
Exp : What Is Amazon Kinesis?
Use Amazon Kinesis to collect and process large streams of data records in real time. You'll create data-processing applications, known as Amazon Kinesis applications. A typical Amazon Kinesis application takes data from data generators called producers and puts it into an Amazon Kinesis stream as data records. These applications can use the Amazon Kinesis Client Library, and they can run on Amazon EC2 instances. The processed records can be sent to dashboards, used to generate alerts, dynamically change pricing and advertising strategies, or send data to a variety of other AWS services.

What Can I Do with Amazon Kinesis?
You can use Amazon Kinesis for rapid and continuous data intake and aggregation. The type of data used includes IT infrastructure log data, application logs, social media, market data feeds, and web clickstream data. Because the response time for the data intake and processing is in real time, the processing is typically lightweight.

The following are typical scenarios for using Amazon Kinesis:

Accelerated log and data feed intake and processing
You can have producers push data directly into a stream. For example, push system and application logs and they'll be available for processing in seconds. This prevents the log data from being lost if the front end or application server fails. Amazon Kinesis provides accelerated data feed intake because you don't batch the data on the servers before you submit it for intake.

Real-time metrics and reporting
You can use data collected into Amazon Kinesis for simple data analysis and reporting in real time. For example, your data-processing application can work on metrics and reporting for system and application logs as the data is streaming in, rather than wait to receive batches of data.

Real-time data analytics
This combines the power of parallel processing with the value of real-time data. For example, process website clickstreams in real time, and then analyze site usability engagement using multiple different Amazon Kinesis applications running in parallel.

Complex stream processing
You can create Directed Acyclic Graphs (DAGs) of Amazon Kinesis applications and data streams. This typically involves putting data from multiple Amazon Kinesis applications into another stream for downstream processing by a different Amazon Kinesis application.
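For the GPS scenario above, a producer sketch might look like this. The stream name and partition-key choice are assumptions, not from the question; using the truck ID as partition key keeps each truck's records ordered within a shard.

```python
import json

def build_put_record(truck_id, lat, lon):
    """Build the parameters for a Kinesis PutRecord call (sketch only)."""
    payload = {"truck": truck_id, "lat": lat, "lon": lon}
    return {
        "StreamName": "delivery-truck-gps",            # hypothetical stream
        "Data": json.dumps(payload).encode("utf-8"),   # record payload as bytes
        "PartitionKey": truck_id,                      # shards per truck
    }

params = build_put_record("truck-42", 40.7128, -74.0060)
# To send for real:
#   import boto3
#   boto3.client("kinesis").put_record(**params)
```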




Question : You are working at a credit card company, which continuously processes credit card data and spending. It can also use this data for analytics, for example to detect fraud. Since this is very critical and sensitive information, which steps will you take to ensure that the data stored in RDS is secured?
A. You will put the RDS instance in a private subnet and configure security groups and a network access control list, so that only the permitted port and protocol can access the data.
B. You will have proper grants created on the tables in the database.
C. You will have an IAM policy created, so that only permitted users can access the RDS instances.
D. You must have installed anti-malware on the RDS instance with the help of AWS support.
E. You will always use a VPN connection to access these RDS instances.

1. A,B,C
2. B,C,D
3. (option omitted in source)
4. A,D,E
5. A,C,E

Correct Answer : 1 (A, B, C)
Explanation: Please remember the following points with regard to AWS RDS instances.
- RDS is a managed service; you will not get OS-level access. All malware protection is AWS's responsibility and not yours. That is the reason you want a managed service.
- If you have permissions to access RDS instances, then you don't need a VPN. However, access over a VPN is also possible.
- Yes, you should keep your RDS instances in a private subnet. You are not going to expose them to the entire world.
- Access to any AWS resource can be controlled by IAM. Hence, you will be creating proper IAM permissions for this RDS instance.
- As you know, at the database level you will also create various permissions (grants), so that only selected people can perform major activities like deleting, dropping, or altering the data.
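Option A can be sketched as a security-group ingress rule that admits only MySQL traffic from the application tier. The group ID and CIDR below are placeholders; the dict matches the parameters of EC2's `authorize_security_group_ingress`.

```python
# Restrict database access to MySQL's port from the app-tier subnet only.
ingress_params = {
    "GroupId": "sg-0123456789abcdef0",       # SG attached to the RDS instance
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 3306,                    # MySQL port
        "ToPort": 3306,
        "IpRanges": [{
            "CidrIp": "10.0.1.0/24",         # application-tier subnet
            "Description": "app tier only",
        }],
    }],
}
# To apply for real:
#   import boto3
#   boto3.client("ec2").authorize_security_group_ingress(**ingress_params)
```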





Question : You have been working on a training website, which has recently become popular. The website is hosted on AWS and backed by RDS (MySQL). Your company recently received good investment, and you decided to advertise on news websites as well as in TV and local radio commercials. You have been informed this will happen in the next three days. As you know, this will bring huge traffic to the website. How will you deal with this situation? Select the options that can help.
A. If the RDS instance is on a lower hardware configuration, you can scale vertically by increasing the memory and CPU.
B. You can scale the RDS instance horizontally by adding more read replicas and diverting the read traffic to the read replicas.
C. You can also use General Purpose SSD as the underlying storage type, so that it can burst when there are heavy reads.
D. You will replace RDS with DynamoDB, which can give better performance.
E. You will create a similar solution (a copy of the existing one) and, during the event, transfer the high traffic to the new solution while all writes go to the existing one.

1. A,B,C
2. B,C,D
3. (option omitted in source)
4. A,D,E
5. A,C,E

Correct Answer : 1 (A, B, C)
Explanation: The question asks about an occasional traffic increase, not a permanent one. Hence, we need storage that can burst whenever required; for that we can use General Purpose SSD. There is always the option to scale vertically, if the instance-size limit has not been reached, by increasing the RAM and CPU. And as the event is mostly going to be read-heavy, the best solution for RDS is to add read replicas and direct all reads to them.
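Option B can be sketched with the parameters for RDS's `create_db_instance_read_replica`; the instance identifiers and class below are made up.

```python
# Parameters for adding a read replica to the existing RDS (MySQL) primary.
replica_params = {
    "DBInstanceIdentifier": "training-site-replica-1",   # new replica's name
    "SourceDBInstanceIdentifier": "training-site-primary",
    "DBInstanceClass": "db.m5.large",                    # may differ from source
}
# To create it for real:
#   import boto3
#   boto3.client("rds").create_db_instance_read_replica(**replica_params)
# The application then sends SELECT traffic to the replica's endpoint,
# while writes stay on the primary.
```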






Related Questions


Question : You have identified network throughput as a bottleneck on your m1.small EC2 instance when uploading data into Amazon S3 in the same region. How do you remedy this situation?
1. Add an additional ENI
2. Change to a larger instance
3. (option omitted in source)
4. Use EBS PIOPS on the local volume



Question : When attached to an Amazon VPC, which two components provide connectivity with external networks? Choose 2 answers
A. Elastic IPs (EIP)
B. NAT Gateway (NAT)
C. Internet Gateway (IGW)
D. Virtual Private Gateway (VGW)
1. A,D
2. B,D
3. (option omitted in source)
4. C,D



Question : Your application currently leverages AWS Auto Scaling to grow and shrink as load increases/decreases, and it has been performing well. Your marketing team expects a steady ramp-up in traffic to follow an upcoming campaign that will result in 20x growth in traffic over 4 weeks. Your forecast for the approximate number of Amazon EC2 instances necessary to meet peak demand is 175. What should you do to avoid potential service disruptions during the ramp-up in traffic?
1. Ensure that you have pre-allocated 175 Elastic IP addresses so that each server will be able to obtain one as it launches
2. Check the service limits in Trusted Advisor and adjust as necessary so the forecasted count remains within limits.
3. (option omitted in source)
4. Pre-warm your Elastic Load Balancer to match the requests per second anticipated during peak demand prior to the marketing campaign



Question : You have an Auto Scaling group associated with an Elastic Load Balancer (ELB). You have noticed that instances launched via the Auto Scaling group are being marked unhealthy due to an ELB health check, but these unhealthy instances are not being terminated. What do you need to do to ensure that instances marked unhealthy by the ELB will be terminated and replaced?
1. Change the thresholds set on the Auto Scaling group health check
2. Add an Elastic Load Balancing health check to your Auto Scaling group
3. (option omitted in source)
4. Change the health check set on the Elastic Load Balancer to use TCP rather than HTTP checks



Question : Which two AWS services provide out-of-the-box, user-configurable automatic backup-as-a-service and backup rotation options?

Choose 2 answers
A. Amazon S3
B. Amazon RDS
C. Amazon EBS
D. Amazon Redshift

1. A,B
2. B,C
3. (option omitted in source)
4. C,D
5. A,C


Question :

How would you move an EBS volume to another availability zone?

1. Right click on the volume and select .
2. Create a snapshot of the volume and then create a volume based off the snapshot in the new availability zone.
3. (option omitted in source)
4. None of the above
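The snapshot-then-recreate procedure in option 2 can be sketched as two EC2 API calls; the volume ID, snapshot ID, and AZ names below are placeholders.

```python
# Step 1: snapshot the source volume (snapshots are regional, not AZ-bound).
snapshot_params = {
    "VolumeId": "vol-0abc123",                 # volume in the original AZ
    "Description": "move to us-east-1b",
}
# snap = boto3.client("ec2").create_snapshot(**snapshot_params)

# Step 2: create a new volume from that snapshot in the target AZ.
volume_params = {
    "SnapshotId": "snap-0def456",              # ID returned by create_snapshot
    "AvailabilityZone": "us-east-1b",          # the new AZ
}
# boto3.client("ec2").create_volume(**volume_params)
```

The original volume can be deleted once the new one is attached and verified.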