
AWS Certified Solutions Architect – Associate Questions and Answers (Dumps and Practice Questions)



Question : If you have an Amazon EBS volume with 100 GiB of data, but only 5 GiB of data have changed since your last snapshot,
how much data will be written to Amazon S3 during the next snapshot?

1. To create a clean copy of the backup, it has to delete the 100 GiB of data and copy a full snapshot
2. It will copy only the changed 5 GiB of data
3. ...
4. It will create a 105 GiB snapshot

Correct Answer : 2


Explanation: Amazon EBS provides the ability to create snapshots (backups) of any Amazon EC2 volume and write a copy of the data in the volume to Amazon S3, where
it is stored redundantly in multiple Availability Zones. The volume does not need to be attached to a running instance in order to take a snapshot. As you
continue to write data to a volume, you can periodically create a snapshot of the volume to use as a baseline for new volumes. These snapshots can be used to
create multiple new Amazon EBS volumes, expand the size of a volume, or move volumes across Availability Zones.

When you create a new volume from a snapshot, it is an exact copy of the original volume at the time the snapshot was taken. By optionally specifying a
different volume size or a different Availability Zone, you can use this functionality to increase the size of an existing volume or to create duplicate
volumes in new Availability Zones. The snapshots can be shared with specific AWS accounts or made public. When you create snapshots, you incur charges in
Amazon S3 based on the volume's total size. For a successive snapshot of the volume, you are only charged for any additional data beyond the volume's
original size.

Amazon EBS snapshots are incremental backups, meaning that only the blocks on the volume that have changed after your most recent snapshot are saved. If you
have a volume with 100 GiB of data, but only 5 GiB of data have changed since your last snapshot, only the 5 GiB of modified data is written to Amazon S3.
Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to
restore the volume.
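The incremental behaviour described above can be sketched in a few lines of Python. This is a toy model for illustration only, not an AWS API; the block IDs and contents are hypothetical placeholders:

```python
# Toy model of incremental EBS snapshots: a snapshot stores only the
# blocks that differ from the previous snapshot, not the whole volume.

def incremental_snapshot(volume_blocks, previous_snapshot_blocks):
    """Return only the blocks that changed since the last snapshot."""
    return {
        block_id: data
        for block_id, data in volume_blocks.items()
        if previous_snapshot_blocks.get(block_id) != data
    }

# A 100 GiB volume (modelled as one-GiB blocks) of which 5 GiB changed:
first = {i: "v1" for i in range(100)}
current = {**first, **{i: "v2" for i in range(5)}}
delta = incremental_snapshot(current, first)   # only 5 blocks are "written"
```

Only the five modified blocks end up in `delta`, mirroring why only 5 GiB is written to Amazon S3 and charged.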




Question :

You can restore an Amazon EBS volume with data from a snapshot stored in Amazon S3. You need to know the ________ of the snapshot you wish to restore your
volume from and you need to have access permissions for the snapshot.


1. date and time
2. ID
3. ...
4. volume id



Correct Answer : 2


Explanation: You can restore an Amazon EBS volume with data from a snapshot stored in Amazon S3. You need to know the ID of the snapshot you wish to restore your
volume from and you need to have access permissions for the snapshot.

New volumes created from existing Amazon S3 snapshots load lazily in the background. This means that after a volume is created from a snapshot, there is no
need to wait for all of the data to transfer from Amazon S3 to your Amazon EBS volume before your attached instance can start accessing the volume and all
its data. If your instance accesses data that hasn't yet been loaded, the volume immediately downloads the requested data from Amazon S3, and continues
loading the rest of the data in the background.

When a block of data on a newly restored Amazon EBS volume is accessed for the first time, you might experience longer than normal latency. To avoid the
possibility of increased read or write latency on a production workload, you should first access all of the blocks on the volume to ensure optimal
performance; this practice is called pre-warming the volume.
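The lazy-loading behaviour can likewise be sketched as a toy model; the dict below stands in for Amazon S3, and this is only the access pattern, not how EBS is implemented:

```python
# Toy model of a volume restored from a snapshot: blocks are fetched
# from the backing store only on first access, so later reads of the
# same block are fast. The first touch pays the download latency.

class LazyVolume:
    def __init__(self, snapshot_store):
        self.snapshot_store = snapshot_store  # stands in for Amazon S3
        self.local_blocks = {}                # blocks already downloaded
        self.fetch_count = 0                  # how many "slow" reads happened

    def read(self, block_id):
        if block_id not in self.local_blocks:
            # First access: fetch from the store (the high-latency path).
            self.local_blocks[block_id] = self.snapshot_store[block_id]
            self.fetch_count += 1
        return self.local_blocks[block_id]

vol = LazyVolume({0: "boot", 1: "data"})
vol.read(0)
vol.read(0)   # second read is served locally; no new fetch
```

Pre-warming is simply reading every block once up front, so production traffic never hits the slow first-access path.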






Question : Creating a DB Snapshot creates a backup of your DB Instance. Creating this backup on a ______ DB Instance results
in a brief I/O suspension that typically lasts no more than a few minutes.


1. Multi-Region
2. Multi-AZ
3. Single-AZ
4. None of the above

Correct Answer : 3 (Single-AZ)

When you create a DB snapshot, you need to identify which DB instance you are going to back up, and then give your DB snapshot a name so you can restore from
it later.

Note

Creating a DB snapshot creates a backup of your DB instance. Creating this backup on a Single-AZ DB instance results in a brief I/O suspension that typically
lasts no more than a few minutes. Multi-AZ DB instances are not affected by this I/O suspension, since the backup is taken on the standby.


Related Questions


Question : You are working in a media company on their website deployment, which you have deployed in AWS. You also want to enable Auto Scaling so
that whenever the load on the website is high it can scale to handle the load. Which of the following are mandatory parameters for a launch configuration?
A. Launch Configuration Name
B. AMI
C. Instance Type
D. Security Group
E. EBS
F. S3 Bucket
1. A,B,C
2. B,C,D
3. ...
4. D,E,F
5. A,C,E


Question : What is the maximum time I can keep my messages in Amazon SQS queues?


1. 1 Week
2. 2 Weeks
3. ...
4. 4 Days


Ans : 2
Exp :

The SQS message retention period is configurable and can be set anywhere from 1 minute to 2 weeks. The default is 4 days, and once
the message retention limit is reached your messages are automatically deleted. The option for longer message retention provides
greater flexibility to allow for longer intervals between message production and consumption.
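The retention bounds quoted above are easy to get wrong when expressed in seconds, so here is the arithmetic as a small sketch. This is pure Python; applying the value to a real queue is a separate SQS `SetQueueAttributes` call, not shown:

```python
# SQS message retention: configurable from 1 minute to 2 weeks, with a
# default of 4 days. All values are in seconds, the unit used by the
# MessageRetentionPeriod queue attribute.

MIN_RETENTION = 60                     # 1 minute
MAX_RETENTION = 14 * 24 * 60 * 60      # 2 weeks = 1,209,600 seconds
DEFAULT_RETENTION = 4 * 24 * 60 * 60   # 4 days = 345,600 seconds

def valid_retention(seconds):
    """True if the value is a legal retention period."""
    return MIN_RETENTION <= seconds <= MAX_RETENTION
```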




Question : You are working in a company which sells event tickets, e.g. movies, events and shows. Your company runs entirely on AWS resources for
this. Every Friday morning, booking opens for new shows/events/movies. As soon as this happens you need many
EC2 instances to serve the website traffic and processing. You have enabled Auto Scaling; which of the following are mandatory requirements for the
Auto Scaling group configuration?
A. You must define the minimum number of EC2 instances to be configured in the scaling group.
B. You must enable health checks for all the instances.
C. You must define the desired capacity for the website to work in normal mode.
D. You must provide the launch configuration used to launch new EC2 instances.

1. A,B
2. B,C
3. ...
4. A,D
5. B,D


Question : Which of the following services runs as a local system and performs various functions to prepare an instance when it first boots up?


1. AMIConfig Service
2. EBSConfig Service
3. ...
4. Ec2Config Service

Ans : 4

Exp : An Amazon EC2 Windows AMI comes with an additional service installed, the Ec2Config Service. The Ec2Config service runs as a
local system and performs various functions to prepare an instance when it first boots up. After the devices have been mapped with the drives,
the Ec2Config service then initializes and mounts the drives. The root drive is initialized and mounted as c:\.
The instance store volumes that come attached to the instance are initialized and mounted as d:\, e:\, and so on. By default,
when an Amazon EBS volume is attached to a Windows instance, it may show up as any drive letter on the instance.
You can change the settings of the Ec2Config service to set the drive letters of the Amazon EBS volumes per your specifications.





Question : In the case of SQS, what happens if you issue a DeleteMessage request on a previously deleted message?


1. SQS returns an "error" response.
2. SQS returns a "success" response.
3. ...
4. SQS returns a "message does not exist" response.


Ans : 2
Exp : SQS returns a "success" response.

Under rare circumstances you might receive a previously deleted message again. This can occur in the rare situation
in which a DeleteMessage operation doesn't delete all copies of a message because one of the servers in the distributed
Amazon SQS system isn't available at the time of the deletion. That message copy can then be delivered again. You should
design your application so that no errors or inconsistencies occur if you receive a deleted message again.
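The "design for redelivery" advice above usually means making the consumer idempotent. A minimal sketch follows; the message IDs are hypothetical, and a real consumer would persist its seen-set durably rather than in memory:

```python
# Idempotent SQS-style consumer: remember which message IDs have been
# processed so a rare redelivered copy of a deleted message is a no-op.

class IdempotentConsumer:
    def __init__(self):
        self.processed_ids = set()
        self.side_effects = 0   # counts how often real work actually ran

    def handle(self, message_id, body):
        if message_id in self.processed_ids:
            return False        # duplicate delivery: ignore silently
        self.side_effects += 1  # do the real work exactly once
        self.processed_ids.add(message_id)
        return True

consumer = IdempotentConsumer()
consumer.handle("msg-1", "order placed")
consumer.handle("msg-1", "order placed")   # redelivered copy is dropped
```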



Question : Amazon SQS uses proven cryptographic methods to authenticate your identity, through the use of your ________

1. Access Key ID and request signature
2. X.509 certificate
3. Either your Access Key ID and request signature, or an X.509 certificate
4. Both "Access Key ID and request signature" and X.509 certificate


Ans : 3
Exp :

Authentication mechanisms are provided to ensure that messages stored in Amazon SQS queues are secured against unauthorized access.
Only the AWS account owners can access the queues they create.

Amazon SQS uses proven cryptographic methods to authenticate your identity, either through the use of your Access Key ID and request signature,
or through the use of an X.509 certificate.

For additional security, you could build your application to encrypt messages before they are placed in a queue.



Question : You have configured your analytics workload to be processed by a Hadoop infrastructure set up on AWS. Based on the data volume it auto
scales and launches new AWS Spot instances.
You are running the entire workload in a single region. You have configured your Auto Scaling to allow up to 100 EC2 instances. You run this entire batch
processing overnight, India time.
However, you have seen that your process is quite slow and only scales up to 20 EC2 instances. What could be the reason?


1. You cannot use more than 20 Spot instances for Auto Scaling.
2. There is some misconfiguration in your Auto Scaling, which needs to be corrected.
3. ...
4. Because you have capped the bill amount per day, which does not allow new instances to be launched.
5. There is a default limit on the number of EC2 Spot instances per region, which is currently 20.
Ans : 5
Exp : By default, there is an account limit of 20 Spot instances per region. If you terminate your Spot instance but do not cancel the request, the request
counts against this limit until Amazon EC2 detects the termination and closes the request.

Spot instance limits are dynamic. When your account is new, your limit might be lower than 20 to start, but increase over time. In addition, your account
might have limits on specific Spot instance types.

Auto Scaling Limits
Resource Default Limit
Launch configurations per region 100
Auto Scaling groups per region 20
Scaling policies per Auto Scaling group 50
Scheduled actions per Auto Scaling group 125
Lifecycle hooks per Auto Scaling group 50
SNS topics per Auto Scaling group 10
Step adjustments per scaling policy 20




Question : Your registered EC2 instances behind an ELB can fail the health check performed by your load balancer.
Which of the following reasons could be correct?

1. Elastic Load Balancing terminates a connection if it is idle for more than 60 seconds.
2. When the load balancer performs a health check, the instance may be under significant load and may take longer than your configured timeout interval
to respond.
3. The instance does not return a 200 response code to an HTTP/HTTPS health check request.
4. The SSL certificate is not correct.


1. 1,2,3
2. 2,3,4
3. ...
4. 3,4,1
5. 1,2,3,4

Ans : 5
Exp : Registered instances failing load balancer health check
Your registered instances can fail the health check performed by your load balancer for one or more of the following reasons:

•Problem: Instance(s) closing the connection to the load balancer.
Cause: Elastic Load Balancing terminates a connection if it is idle for more than 60 seconds. The idle connection is established when there is no read or
write event taking place on both the sides of the load balancer (client to load balancer and load balancer to the back-end instance).
Solution: Set the idle timeout to at least 60 seconds on your registered instances.

•Problem: Responses timing out.
Cause: When the load balancer performs a health check, the instance may be under significant load and may take longer than your configured timeout interval
to respond.
Solution: Try adjusting the timeout on your health check settings.

•Problem: Non-200 response received.
Cause: When the load balancer performs an HTTP/HTTPS health check, the instance must return a 200 HTTP code. Any other response code will be considered a
failed health check.
Solution: Search your application logs for responses sent to the health check requests.

•Problem: Failing public key authentication.
Cause: If you are using an HTTPS or SSL load balancer with back-end authentication enabled, the public key authentication will fail if the public key on
the certificate does not match the public key configured on the load balancer.
Solution: Check if your SSL certificate needs to be updated. If your SSL certificate is current, try re-installing the certificate on your load balancer.
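The timeout and non-200 failure modes above can be condensed into a tiny classifier. This is illustrative only; real health checks are HTTP probes made by the load balancer, and the 5-second threshold below is a hypothetical setting:

```python
# Classify a single health-check probe the way the troubleshooting
# list above describes: a slow answer fails on timeout, and anything
# other than HTTP 200 fails as a bad response.

def classify_probe(status_code, response_seconds, timeout_seconds=5):
    if response_seconds > timeout_seconds:
        return "failed: timeout"     # instance too loaded to answer in time
    if status_code != 200:
        return "failed: non-200"     # any other response code fails the check
    return "healthy"
```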



Question : During a storage scaling operation, your DB Instance was not available because it had a status of ________


1. storage-empty
2. storage-no-space
3. ...
4. storage-full

Ans : 4
Exp : Analyzing your Database Workload on a DB Instance Using SQL Server Tuning Advisor
The Database Engine Tuning Advisor is a client application provided by Microsoft that analyzes database workload and recommends
an optimal set of indexes for your SQL Server databases based on the kinds of queries you run. Like SQL Server Management Studio,
you run Tuning Advisor from a client computer that connects to your RDS DB Instance that is running SQL Server. The client computer
can be a local computer that you run on premises within your own network or it can be an Amazon EC2 Windows instance that is running
in the same region as your RDS DB Instance.

This section shows how to capture a workload for Tuning Advisor to analyze. This is the preferred process for capturing
a workload because RDS restricts host access to the SQL Server instance. The full documentation on Tuning Advisor can be found on MSDN.

To use Tuning Advisor, you must provide what is called a workload to the advisor. A workload is a set of Transact-SQL statements
that execute against a database or databases that you want to tune. Database Engine Tuning Advisor uses trace files, trace tables,
Transact-SQL scripts, or XML files as workload input when tuning databases. When working with RDS, a workload can be a file on a client computer
or a database table on an RDS SQL Server DB accessible to your client computer. The file or the table must contain queries against the databases
you want to tune in a format suitable for replay.

For Tuning Advisor to be most effective, a workload should be as realistic as possible. You can generate a workload file or
table by performing a trace against your DB Instance. While a trace is running, you can either simulate a load on your DB Instance
or run your applications with a normal load.

There are two types of traces: client-side and server-side. A client-side trace is easier to set up and you can watch trace events
being captured in real-time in SQL Server Profiler. A server-side trace is more complex to set up and requires some Transact-SQL scripting.
In addition, because the trace is written to a file on the RDS DB Instance, storage space is consumed by the trace. It is important
to keep track of how much storage space a running server-side trace uses, because the DB Instance could enter a storage-full state and
would no longer be available if it runs out of storage space.

For a client-side trace, when a sufficient amount of trace data has been captured in the SQL Server Profiler, you can then generate
the workload file by saving the trace to either a file on your local computer or in a database table on a DB Instance that is available
to your client computer. The main disadvantage of using a client-side trace is that the trace may not capture all queries when under heavy
loads. This could weaken the effectiveness of the analysis performed by the Database Engine Tuning Advisor. If you need to run a trace
under heavy loads and you want to ensure that it captures every query during a trace session, you should use a server-side trace.

For a server-side trace, you must get the trace files on the DB Instance into a suitable workload file or you can save the trace to
a table on the DB Instance after the trace completes. You can use the SQL Server Profiler to save the trace to a file on your local
computer or have the Tuning Advisor read from the trace table on the DB Instance.





Question : In a VPC, a NAT instance must be able to send and receive traffic when the source or destination is not itself. Therefore:


1. The source/destination check should be enabled; it is disabled by default.
2. The source/destination check should be disabled.
3. ...
4. The NAT instance should have a public IP address configured.
Ans : 2
Exp :
Each EC2 instance performs source/destination checks by default. This means that the instance must be the source or destination
of any traffic it sends or receives. However, a NAT instance must be able to send and receive traffic when the source or destination
is not itself. Therefore, you must disable source/destination checks on the NAT instance.

You can disable the SrcDestCheck attribute for a NAT instance that's either running or stopped using the console or the command line.
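The effect of the check can be modelled in a few lines. This is a toy packet filter for illustration, not EC2 networking, and the IP addresses are hypothetical:

```python
# Toy model of the EC2 source/destination check: with the check on, an
# instance drops any packet it is neither the source nor the destination
# of -- which is exactly the traffic a NAT instance must forward.

def accepts_packet(own_ip, src, dst, source_dest_check=True):
    if not source_dest_check:
        return True                 # NAT case: forward anything
    return own_ip in (src, dst)     # default: must be an endpoint itself
```

A NAT instance at 10.0.0.5 relaying 10.0.1.7 to 203.0.113.9 only works once `source_dest_check` is off.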





Question : For a VPC instance, select the statement which applies correctly


1. You can change the tenancy of an instance at any time after you launch it.
2. You can't change the tenancy of an instance after you launch it.
3. ...
4. None of the above
Ans : 2
Exp : Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer.
Your Dedicated Instances are physically isolated at the host hardware level from your instances that aren't Dedicated Instances and
from instances that belong to other AWS accounts.

Each instance that you launch into a VPC has a tenancy attribute. You can't change the tenancy of an instance after you launch it. This attribute has the
following values.

Value: default
Description: Your instance runs on shared hardware.

Value: dedicated
Description: Your instance runs on single-tenant hardware.

Each VPC has a related instance tenancy attribute. You can't change the instance tenancy of a VPC after you create it. This attribute has the following
values.

Value: default
Description: An instance launched into the VPC is a Dedicated Instance if the tenancy attribute for the instance is dedicated.

Value: dedicated
Description: All instances launched into the VPC are Dedicated Instances, regardless of the value of the tenancy attribute for the instance.


If you are planning to use Dedicated Instances, you can implement them using either method:
•Create the VPC with the instance tenancy set to dedicated (all instances launched into this VPC are Dedicated Instances).
•Create the VPC with the instance tenancy set to default, and specify dedicated tenancy for any instances that should be Dedicated Instances when you
launch them.




Question : A company has configured and peered two VPCs: VPC-1 and VPC-2. VPC-1 contains only
private subnets, and VPC-2 contains only public subnets. The company uses a single AWS
Direct Connect connection and private virtual interface to connect their on-premises
network with VPC-1. Which two methods increase the fault tolerance of the connection to
VPC-1? Choose 2 answers
A. Establish a hardware VPN over the internet between VPC-2 and the on-premises network.
B. Establish a hardware VPN over the internet between VPC-1 and the on-premises network.
C. Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
D. Establish a new AWS Direct Connect connection and private virtual interface in a different AWS region than VPC-1.
E. Establish a new AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1



1. A,B
2. B,C
3. ...
4. D,E
Ans : 2
Exp :




Question : You are working in a big finance company, where it is mandatory that user login be based on temporary tokens and that authentication
be based on user information stored in LDAP. Which of the following satisfy the given requirement?
A. EC2 Roles
B. MFA
C. Root User
D. Federation
E. Security Group

1. A,B
2. B,C
3. ...
4. A,D
5. B,D


Question : You are the root owner of the AWS accounts, and recently a new AWS administrator joined your team; you are worried about protecting
this new user's credentials. How would you make sure they are protected?
A. Enable MFA for this admin account
B. You can limit that the account should not be accessed outside of the city
C. You will create a password policy, e.g. a 30-day password change and which characters are allowed
D. Restrict the IP addresses from which this account can be logged in to

1. A,B,C
2. B,C,D
3. ...
4. A,B,D


Question : Which of the below mentioned metrics cannot have a CloudWatch alarm?

1. RRS lost object
2. EC2 instance StatusCheckFailed
3. ...
4. Auto Scaling group CPU utilization

Ans : 1
Exp :

Amazon CloudWatch provides monitoring for AWS cloud resources and the applications customers run on AWS. Developers and system administrators can use it
to collect and track metrics, gain insight, and react immediately to keep their applications and businesses running smoothly. Amazon CloudWatch monitors
AWS resources such as Amazon EC2 and Amazon RDS DB instances, and can also monitor custom metrics generated by a customer's applications and services.
With Amazon CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.

Amazon CloudWatch provides a reliable, scalable, and flexible monitoring solution that you can start using within minutes. You no longer need to set up,
manage, or scale your own monitoring systems and infrastructure. Using Amazon CloudWatch, you can easily monitor as much or as little metric data as you
need. Amazon CloudWatch lets you programmatically retrieve your monitoring data, view graphs, and set alarms to help you troubleshoot, spot trends, and
take automated action based on the state of your cloud environment.

The user can set an alarm on all the CloudWatch metrics, such as the EC2 CPU utilization or the Auto Scaling group metrics.
CloudWatch does not support AWS S3. Thus, it cannot set an alarm on RRS lost objects.





Question : You have been asked by the chief architect to implement tighter security controls on each EC2 instance, because each EC2 instance does
very critical data processing, and a leak could cause very serious damage. He suggested that security at the subnet level is optional. Which of the
following will you implement in this case mandatorily?


1. You will implement a NACL
2. You will set up a Security Group
3. ...
4. You will create a custom firewall for each EC2 instance and define strict rules.



Question : You are working with a very big IT consultancy company and you have been asked to create a virtual
network that can accommodate as many IP addresses as possible. How many distinct IP addresses will be supported by the biggest VPC in AWS?


1. 65531

2. 32763

3. ...

4. 100
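For checking the arithmetic: the largest CIDR block AWS allows for a VPC is a /16, and AWS reserves the first four addresses and the last address in each subnet. Treating the whole /16 as a single subnet is an assumption made here purely for the calculation:

```python
# Usable addresses in a single-subnet VPC of a given prefix length:
# 2^(32 - prefix) total addresses, minus the 5 AWS reserves per subnet.

def usable_addresses(prefix_length, reserved_per_subnet=5):
    return 2 ** (32 - prefix_length) - reserved_per_subnet

largest_vpc = usable_addresses(16)   # the /16 upper bound for a VPC
```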