
AWS Certified SysOps Administrator - Associate Questions and Answers (Dumps and Practice Questions)



Question : What does the below CloudWatch command mean?

Prompt>aws cloudwatch put-metric-alarm --alarm-name ebs-mon --alarm-description "Alarm when EBS volume exceeds 100MB throughput"
--metric-name VolumeReadBytes --namespace AWS/EBS --statistic Average --period 300 --threshold 100000000 --comparison-operator GreaterThanThreshold
--dimensions Name=VolumeId,Value=my-volume-id --evaluation-periods 3 --alarm-actions arn:aws:sns:us-east-1:1234567890:my-alarm-topic
--insufficient-data-actions arn:aws:sns:us-east-1:1234567890:my-insufficient-data-topic

1. Command has a syntax error.
2. To send an Amazon Simple Notification Service email message when EBS exceeds 100 MB throughput
3. Access Mostly Uused Products by 50000+ Subscribers

Correct Answer : 2

Explanation: The command creates an alarm that sends an Amazon Simple Notification Service email message when the EBS volume exceeds 100 MB throughput. To set this up:
1. Create an Amazon Simple Notification Service topic. See instructions for creating an Amazon SNS topic in Set Up Amazon Simple Notification Service.
2. Create the alarm.
Prompt>aws cloudwatch put-metric-alarm --alarm-name ebs-mon --alarm-description "Alarm when EBS volume exceeds 100MB throughput" --metric-name VolumeReadBytes --namespace AWS/EBS
--statistic Average --period 300 --threshold 100000000 --comparison-operator GreaterThanThreshold --dimensions Name=VolumeId,Value=my-volume-id --evaluation-periods 3 --alarm-actions
arn:aws:sns:us-east-1:1234567890:my-alarm-topic --insufficient-data-actions arn:aws:sns:us-east-1:1234567890:my-insufficient-data-topic
The AWS CLI returns to the command prompt if the command succeeds.
3. Test the alarm by forcing an alarm state change.
- Force an alarm state change to ALARM:
Prompt>aws cloudwatch set-alarm-state --alarm-name ebs-mon --state-reason "initializing" --state-value OK
Prompt>aws cloudwatch set-alarm-state --alarm-name ebs-mon --state-reason "initializing" --state-value ALARM
Prompt>aws cloudwatch set-alarm-state --alarm-name ebs-mon --state-reason "initializing" --state-value INSUFFICIENT_DATA
- Check that two emails have been received.
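The alarm's trigger condition can be sketched in plain Python: the dictionary mirrors the CLI flags above, and breaches() is an illustrative helper (not an AWS API call) that applies the same rule of three consecutive 5-minute periods above the 100,000,000-byte (100 MB) threshold.

```python
# Sketch of the alarm defined by the put-metric-alarm command above.
# The dict mirrors the CLI flags; breaches() is an illustrative helper,
# not part of any AWS SDK.

alarm = {
    "AlarmName": "ebs-mon",
    "MetricName": "VolumeReadBytes",
    "Namespace": "AWS/EBS",
    "Statistic": "Average",
    "Period": 300,                      # 5-minute evaluation period
    "Threshold": 100_000_000,           # 100 MB in bytes
    "ComparisonOperator": "GreaterThanThreshold",
    "EvaluationPeriods": 3,             # 3 consecutive periods must breach
}

def breaches(datapoints, alarm):
    """Return True if the last EvaluationPeriods datapoints all exceed the threshold."""
    n = alarm["EvaluationPeriods"]
    recent = datapoints[-n:]
    return len(recent) == n and all(dp > alarm["Threshold"] for dp in recent)

# Two high readings are not enough; three consecutive breaches trigger the alarm.
print(breaches([150e6, 150e6], alarm))         # False
print(breaches([90e6, 150e6, 150e6], alarm))   # False
print(breaches([150e6, 150e6, 150e6], alarm))  # True
```

This is why a single read spike does not page anyone: the average must stay above 100 MB for three 5-minute periods in a row.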







Question : Which of the following are correct use cases for Amazon Glacier?

1. Offsite Enterprise Information Archiving
2. Archiving Media Assets
3. Access Mostly Uused Products by 50000+ Subscribers
4. Magnetic Tape Replacement
5. All of the above


Correct Answer : 5

Explanation: Common Use Cases
Amazon Glacier can be used to support a wide variety of use cases, for example:
Offsite Enterprise Information Archiving
Organizations are archiving more and more data, driven by business and regulatory needs and the increasing amount of data they produce. Examples include email, legal records, and
financial and business documents. This data is often retained for years or decades, but is accessed infrequently. Amazon Glacier allows you to cost-effectively and securely store
enterprise data offsite, making it simple, inexpensive, and safe to retain archived data for as long as desired. The service's extremely low storage cost enables you to retain data
that may be of future value, but that otherwise may have been discarded in order to reduce costs or to make room for additional data. Businesses and organizations of any size can use
Amazon Glacier to reduce their storage costs and free up their primary storage infrastructure.
Archiving Media Assets
Media companies' core assets are their content, which includes books, movies, music, images, news footage, and TV shows. The number and size of these assets continue to grow, driven
by new production and new technologies such as high definition TV, social media and 3D video. These assets can grow to tens or hundreds of petabytes. Safely and securely storing

these assets is of critical importance. Data accessibility is also critical. For example, certain archival news footage can suddenly become valuable based on unfolding events.
Archiving media has traditionally required costly, multi site, redundant data centers and offsite vaulting. Amazon Glacier reduces the cost of storing these assets while
simultaneously increasing the durability, ease of use, and accessibility of the content. Accessing media files in Amazon Glacier is as simple as making calls to the service's APIs.
Customers don't need to worry about transporting storage media from offsite facilities in order to restore data.
Archiving Research and Scientific Data
Research and scientific organizations, such as pharmaceutical and biotech companies, as well as universities and research institutes, have large data archiving needs. An example use
case is drug development, where a substantial amount of data is generated and must be retained so researchers can verify experimental drug test results. Traditionally, this data has
been stored on inflexible tape-based storage systems with copies stored in multiple sites and often with a copy vaulted offsite as well. Amazon Glacier reduces the cost of storing
these data sets by eliminating the operational overhead involved in managing hardware and data centers. The service automatically stores redundant data in multiple facilities and on
multiple devices within each facility and is built to be automatically self-healing, performing regular, systematic data integrity checks and using redundant data to perform
automatic repairs if errors are discovered.
Digital Preservation
Digital preservationists in organizations such as libraries, historical societies, non-profit organizations, and governments are increasing their efforts to preserve valuable but
aging digital content such as websites, software source code, video games, user-generated content, and other digital artifacts that are no longer readily available. These archive
volumes may start small, but can grow to petabytes over time. Amazon Glacier makes highly durable, cost-effective storage accessible for data volumes of any size. This contrasts with
traditional data archiving solutions that require large-scale and accurate capacity planning in order to be cost-effective.
Magnetic Tape Replacement
Amazon Glacier can replace on-premises or offsite tape libraries. Although magnetic tape-based storage can be cost-effective when operated at scale, it can be a drain on resources, as
one (or more) tape libraries need to be maintained (often in geographically distinct locations), requiring specialized personnel and taking up valuable space in data centers. In
addition, the tapes themselves must be carefully stored and managed, which can include periodically copying data from old tapes onto new ones to ensure that your data can still be
read as tape technology standards evolve. Replacing your tape library with Amazon Glacier removes the burden of managing these operational challenges. Entire data sets can be moved
from tape libraries into Amazon Glacier, a process that can be economically accelerated by using AWS Import/Export.

Tape's low-cost potential also requires accurate capacity planning, a process that is usually error-prone, especially when storage growth is unpredictable, as it often is.
Over-provisioning capacity can result in under-utilization and higher costs, while under-provisioning can trigger expensive hardware upgrades far earlier than planned. Even when
capacity planning is accurate, periodic hardware upgrades are still common, as older tape libraries are less efficient and therefore costlier to operate. You can avoid investing in
tape library upgrades, whether driven by capacity constraints or a technology refresh, and instead simply start storing data in Amazon Glacier. In doing so, you avoid the need for
large upfront capital and expensive multi-year support commitments. With Amazon Glacier, you pay only for the capacity you use, eliminating the need for capacity planning.
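The pay-per-use point can be made concrete with a little arithmetic. The rate below is a hypothetical illustration, not current AWS pricing; the comparison only shows why paying for actual usage beats paying for an over-provisioned allocation.

```python
# Illustrative comparison of pay-per-use storage vs. over-provisioned capacity.
# The $4/TB-month rate is a hypothetical example, not current AWS pricing.

RATE_PER_TB_MONTH = 4.0

def monthly_cost(terabytes):
    """Cost of storing the given number of terabytes for one month."""
    return terabytes * RATE_PER_TB_MONTH

stored_tb = 10       # 10 TB actually archived (pay-per-use charges only this)
provisioned_tb = 50  # 50 TB provisioned up front to absorb unpredictable growth

print(monthly_cost(stored_tb))       # pay-per-use: 40.0 per month
print(monthly_cost(provisioned_tb))  # provisioned: 200.0 per month
```

With provisioned capacity you pay for the 40 TB of headroom whether or not it is ever used; with pay-per-use the bill tracks the archive's actual size.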




Question :
Your web application front end consists of multiple EC2 instances behind an Elastic Load Balancer.
You configured ELB to perform health checks on these EC2 instances. If an instance fails to pass health checks, which statement will be true?

1. The instance is replaced automatically by the ELB.
2. The instance gets terminated automatically by the ELB.
3. Access Mostly Uused Products by 50000+ Subscribers
4. The instance gets quarantined by the ELB for root cause analysis


Correct Answer : 3

Explanation: You can build fault tolerant applications by placing your Amazon EC2 instances in multiple Availability Zones. To achieve even more fault tolerance with less manual
intervention, you can use Elastic Load Balancing. You get improved fault tolerance by placing your compute instances behind an Elastic Load Balancer, as it can automatically balance
traffic across multiple instances and multiple Availability Zones and ensure that only healthy Amazon EC2 instances receive traffic. You can set up an Elastic Load Balancer to load
balance incoming application traffic across Amazon EC2 instances in a single Availability Zone or multiple Availability Zones. Elastic Load Balancing can detect the health of Amazon
EC2 instances. When it detects unhealthy Amazon EC2 instances, it no longer routes traffic to those unhealthy Amazon EC2 instances. Instead, it spreads the load across the remaining
healthy Amazon EC2 instances. If all of your Amazon EC2 instances in a particular Availability Zone are unhealthy, but you have set up Amazon EC2 instances in multiple Availability
Zones, Elastic Load Balancing will route traffic to your healthy Amazon EC2 instances in those other zones. It will resume load balancing to the original Amazon EC2 instances when
they have been restored to a healthy state.
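The routing behavior described above can be modeled in a few lines of Python. This is only a sketch of the decision logic with hypothetical instance and AZ names; Elastic Load Balancing itself is a managed service and does all of this for you.

```python
# Minimal model of ELB health-check routing: traffic goes only to healthy
# instances, and if every instance in one AZ fails health checks, the healthy
# instances in the other AZ absorb all traffic. Names are hypothetical.

instances = [
    {"id": "i-aaa", "az": "us-east-1a", "healthy": True},
    {"id": "i-bbb", "az": "us-east-1a", "healthy": False},  # failed health checks
    {"id": "i-ccc", "az": "us-east-1b", "healthy": True},
]

def routable(instances):
    """Return the IDs of instances that may currently receive traffic."""
    return [i["id"] for i in instances if i["healthy"]]

print(routable(instances))  # ['i-aaa', 'i-ccc']

# Now all of us-east-1a becomes unhealthy: traffic fails over to us-east-1b.
for i in instances:
    if i["az"] == "us-east-1a":
        i["healthy"] = False
print(routable(instances))  # ['i-ccc']
```

Note that the unhealthy instance is neither terminated nor replaced here; it is simply excluded from routing, and it would rejoin the pool once it passes health checks again. (Replacement of failed instances is the job of Auto Scaling, not the load balancer.)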



Related Questions


Question : What are characteristics of Amazon S3?
Choose 2 answers
A. Objects are directly accessible via a URL
B. S3 should be used to host a relational database
C. S3 allows you to store objects of virtually unlimited size
D. S3 allows you to store virtually unlimited amounts of data
E. S3 offers Provisioned IOPS

1. A,C
2. C,D
3. Access Mostly Uused Products by 50000+ Subscribers
4. A,D
5. A,E



Question : You receive a frantic call from a new DBA who accidentally dropped a table containing all your
customers.
Which Amazon RDS feature will allow you to reliably restore your database to within 5 minutes of
when the mistake was made?
1. Multi-AZ RDS
2. RDS snapshots
3. Access Mostly Uused Products by 50000+ Subscribers
4. RDS automated backup




Question : A media company produces new video files on-premises every day with a total size of around
100 GB after compression. All files have a size of 1-2 GB and need to be uploaded to Amazon S3
every night in a fixed time window between 3 am and 5 am. Current upload takes almost 3 hours,
although less than half of the available bandwidth is used.
What step(s) would ensure that the file uploads are able to complete in the allotted time window?
1. Increase your network bandwidth to provide faster throughput to S3
2. Upload the files in parallel to S3
3. Access Mostly Uused Products by 50000+ Subscribers

4. Use AWS Import/Export to transfer the video files
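The parallel-upload idea in option 2 can be sketched with Python's standard concurrent.futures module. upload_file below is a stub standing in for a real S3 transfer call (e.g. via an SDK); the point is only that several uploads in flight at once use the spare bandwidth that a single serial upload leaves idle.

```python
# Sketch of uploading files in parallel: with less than half the bandwidth
# used by one serial upload, running several transfers concurrently raises
# aggregate throughput. upload_file is a stub, not a real S3 call.

from concurrent.futures import ThreadPoolExecutor

def upload_file(name):
    """Stub: pretend to upload one 1-2 GB file and report success."""
    return f"uploaded {name}"

files = [f"video_{n:03d}.mp4" for n in range(8)]  # hypothetical file names

# Keep up to 4 uploads in flight at once instead of one after another.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(upload_file, files))

print(results[0])    # uploaded video_000.mp4
print(len(results))  # 8
```

For very large individual objects, S3 multipart upload applies the same idea within a single file by sending parts concurrently.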




Question : You are running a web application on AWS consisting of the following components: an Elastic
Load Balancer (ELB), an Auto-Scaling Group of EC2 instances running Linux/PHP/Apache, and
Relational Database Service (RDS) MySQL.
Which security measures fall into AWS's responsibility?

1. Protect the EC2 instances against unsolicited access by enforcing the principle of least-privilege access
2. Protect against IP spoofing or packet sniffing
3. Access Mostly Uused Products by 50000+ Subscribers

4. Install latest security patches on ELB, RDS, and EC2 instances



Question : You use S3 to store critical data for your company. Several users within your group currently have
full permissions to your S3 buckets. You need to come up with a solution that does not impact your
users and also protects against the accidental deletion of objects.
Which two options will address this issue?
Choose 2 answers
A. Enable versioning on your S3 Buckets
B. Configure your S3 Buckets with MFA delete
C. Create a Bucket policy and only allow read only permissions to all users at the bucket level
D. Enable object life cycle policies and configure the data older than 3 months to be archived in Glacier
1. A,C
2. C,D
3. Access Mostly Uused Products by 50000+ Subscribers
4. A,D
5. A,B




Question : An organization's security policy requires multiple copies of all critical data to be replicated across
at least a primary and backup data center. The organization has decided to store some critical data on Amazon S3.
Which option should you implement to ensure this requirement is met?
1. Use the S3 copy API to replicate data between two S3 buckets in different regions
2. You do not need to implement anything since S3 data is automatically replicated between regions
3. Access Mostly Uused Products by 50000+ Subscribers
4. You do not need to implement anything since S3 data is automatically replicated between multiple facilities within an AWS Region