Question : You are working in a media company on their website deployment, and you have deployed the website in AWS. You also want to enable auto scaling so that whenever the load on the website is high it can scale and handle the load. Which of the following are mandatory parameters for a launch configuration? A. Launch Configuration Name B. AMI C. Instance Type D. Security Group E. EBS F. S3 Bucket 1. A,B,C 2. B,C,D 4. D,E,F 5. A,C,E
Correct Answer : 1 (A, B, C) Explanation: To properly configure an AWS Auto Scaling launch configuration, you must define these three things: - Launch Configuration Name - AMI - Instance Type
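For reference, a minimal boto3 (Python) sketch of creating a launch configuration with only the mandatory parameters set; the region, names, AMI ID, and security group ID below are illustrative placeholders, not values from the question.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Only the name, AMI and instance type are strictly needed to define the launch configuration.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="media-web-lc",        # A. Launch Configuration Name (mandatory)
    ImageId="ami-0123456789abcdef0",               # B. AMI (mandatory; placeholder ID)
    InstanceType="t3.micro",                       # C. Instance Type (mandatory)
    SecurityGroups=["sg-0123456789abcdef0"],       # D. Security Group (optional)
)
```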
Question : What is the maximum time I can keep my messages in Amazon SQS queues?
The SQS message retention period is configurable and can be set anywhere from 1 minute to 2 weeks. The default is 4 days and once the message retention limit is reached your messages will be automatically deleted. The option for longer message retention provides greater flexibility to allow for longer intervals between message production and consumption.
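As an illustration, a short boto3 sketch (the queue name is hypothetical) that sets the retention period to the 14-day maximum; SQS expects the value in seconds, passed as a string.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="example-queue")["QueueUrl"]

# 14 days = 1,209,600 seconds; the allowed range is 60 (1 minute) to 1,209,600 (2 weeks).
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"MessageRetentionPeriod": "1209600"},
)
```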
Question : You are working in a company which sells event tickets, e.g. movies, events and shows. Your company uses AWS resources entirely for this. Every Friday morning, booking opens for the new shows/events/movies. As soon as this happens you need many EC2 instances to serve the website traffic and processing. You have enabled auto scaling; which of the following are mandatory requirements for the auto scaling group configuration? A. You must define the minimum number of EC2 instances to be configured in the scaling group. B. You must enable the health check of all the instances. C. You must define the desired capacity for the website to work in normal mode. D. You must provide the launch configuration to launch new EC2 instances.
Correct Answer : A, D Explanation: When you define a new auto scaling group, the following two parameters are mandatory in addition to the AMI: - Minimum number of EC2 instances in the auto scaling group - The launch configuration. Desired capacity is an optional parameter.
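A hedged boto3 sketch of the corresponding API call; note that the CreateAutoScalingGroup API also requires MaxSize, and the names, Availability Zones, and sizes below are illustrative.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="ticket-web-asg",
    LaunchConfigurationName="media-web-lc",   # mandatory: how new EC2 instances are launched
    MinSize=2,                                # mandatory: minimum number of instances in the group
    MaxSize=20,                               # also required by the API call itself
    # DesiredCapacity is optional; it defaults to MinSize when omitted.
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```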
Question : Which of the following services runs as a local system and performs various functions to prepare an instance when it first boots up?
Exp : An Amazon EC2 Windows AMI comes with an additional service installed, the Ec2Config Service. The Ec2Config service runs as a local system and performs various functions to prepare an instance when it first boots up. After the devices have been mapped to the drives, the Ec2Config service then initializes and mounts the drives. The root drive is initialized and mounted as c:\. The instance store volumes that come attached to the instance are initialized and mounted as d:\, e:\, and so on. By default, when an Amazon EBS volume is attached to a Windows instance, it may show up as any drive letter on the instance. You can change the settings of the Ec2Config service to set the drive letters of the Amazon EBS volumes per your specifications.
Important
The volumes page in the Amazon EC2 console has been redesigned, and you can switch between the new and old interfaces by clicking the link in the preview message at the top of the console page. You can switch back to the old interface during the trial period; however, this topic may sometimes describe the new interface only.
Question : In the case of SQS, what happens if I issue a DeleteMessage request on a previously deleted message?
Under rare circumstances you might receive a previously deleted message again. This can occur in the rare situation in which a DeleteMessage operation doesn't delete all copies of a message because one of the servers in the distributed Amazon SQS system isn't available at the time of the deletion. That message copy can then be delivered again. You should design your application so that no errors or inconsistencies occur if you receive a deleted message again.
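One way to follow that design advice is to make the consumer idempotent, as in this illustrative boto3 sketch; the queue URL, the process_order function, and the in-memory set are placeholders for real deduplication logic.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder URL
processed_ids = set()  # in production this would be a durable store, e.g. a database table

def process_order(body):
    print("processing", body)  # placeholder for real work

resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    if msg["MessageId"] not in processed_ids:
        process_order(msg["Body"])
        processed_ids.add(msg["MessageId"])
    # If this is a stray copy of an already-deleted message, deleting it again is harmless.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```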
Question : Amazon SQS uses proven cryptographic methods to authenticate your identity, through the use of your
Authentication mechanisms are provided to ensure that messages stored in Amazon SQS queues are secured against unauthorized access. Only the AWS account owners can access the queues they create.
Amazon SQS uses proven cryptographic methods to authenticate your identity, either through the use of your Access Key ID and request signature, or through the use of an X.509 certificate.
For additional security, you could build your application to encrypt messages before they are placed in a queue.
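For that last point, a sketch of client-side encryption before enqueueing, using the third-party cryptography package purely as an illustration; key handling is simplified here, and in practice the key would come from something like AWS KMS.

```python
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # illustrative only; manage real keys outside the application
cipher = Fernet(key)

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="secure-queue")["QueueUrl"]

# Producer: encrypt before the message is placed in the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody=cipher.encrypt(b"card=4111-xxxx").decode())

# Consumer: decrypt after receiving.
for msg in sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=5).get("Messages", []):
    plaintext = cipher.decrypt(msg["Body"].encode())
```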
Question : You have configured your analytics workload to be processed by a Hadoop infrastructure set up on AWS. Based on the data volumes it auto scales and launches new AWS Spot Instances. You are running the entire workload in a single region. You have configured your auto scaling to allow up to 100 EC2 instances. You run this entire batch processing during night time in India. However, you have seen that your process is quite slow and only scales up to 20 EC2 instances. What could be the reason?
1. You cannot use more than 20 Spot Instances for auto scaling. 2. There is some misconfiguration in your auto scaling, which needs to be corrected 4. Because you have capped the bill amount per day, which does not allow new instances to be launched. 5. There is a limit on the default number of EC2 instances per region, which is currently 20 Ans : 5 Exp : By default, there is an account limit of 20 Spot Instances per region. If you terminate your Spot Instance but do not cancel the request, the request counts against this limit until Amazon EC2 detects the termination and closes the request.
Spot instance limits are dynamic. When your account is new, your limit might be lower than 20 to start, but increase over time. In addition, your account might have limits on specific Spot instance types.
Auto Scaling default limits per resource:
- Launch configurations per region: 100
- Auto Scaling groups per region: 20
- Scaling policies per Auto Scaling group: 50
- Scheduled actions per Auto Scaling group: 125
- Lifecycle hooks per Auto Scaling group: 50
- SNS topics per Auto Scaling group: 10
- Step adjustments per scaling policy: 20
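To check what your own account currently allows, one option is to query the account attributes with boto3, as in this sketch; note that this reports the general per-region instance limit attribute, and Spot limits are tracked separately (raised via a limit-increase request).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_account_attributes(AttributeNames=["max-instances"])
for attr in resp["AccountAttributes"]:
    for value in attr["AttributeValues"]:
        print(attr["AttributeName"], "=", value["AttributeValue"])
```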
Question : Your registered EC2 instances with ELB can fail the health check performed by your load balancer; which of the following reasons could be correct?
1. Elastic Load Balancing terminates a connection if it is idle for more than 60 seconds. 2. When the load balancer performs a health check, the instance may be under significant load and may take longer than your configured timeout interval to respond 4. SSL certificate is not correct
Ans : 5 Exp : Registered instances failing load balancer health check: Your registered instances can fail the health check performed by your load balancer for one or more of the following reasons:
•Problem: Instance(s) closing the connection to the load balancer. Cause: Elastic Load Balancing terminates a connection if it is idle for more than 60 seconds. The idle connection is established when there is no read or write event taking place on both the sides of the load balancer (client to load balancer and load balancer to the back-end instance). Solution: Set the idle timeout to at least 60 seconds on your registered instances.
•Problem: Responses timing out. Cause: When the load balancer performs a health check, the instance may be under significant load and may take longer than your configured timeout interval to respond. Solution: Try adjusting the timeout on your health check settings (a sketch follows this list).
•Problem: Non-200 response received. Cause: When the load balancer performs an HTTP/HTTPS health check, the instance must return a 200 HTTP code. Any other response code will be considered a failed health check. Solution: Search your application logs for responses sent to the health check requests.
•Problem: Failing public key authentication. Cause: If you are using an HTTPS or SSL load balancer with back-end authentication enabled, the public key authentication will fail if the public key on the certificate does not match the public key configured on the load balancer. Solution: Check if your SSL certificate needs to be updated. If your SSL certificate is current, try re-installing the certificate on your load balancer.
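For the timeout case above, a hedged boto3 sketch of adjusting a Classic Load Balancer health check; the load balancer name and target path are illustrative.

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

elb.configure_health_check(
    LoadBalancerName="my-load-balancer",
    HealthCheck={
        "Target": "HTTP:80/health",    # must return HTTP 200 to pass
        "Interval": 30,                # seconds between checks
        "Timeout": 10,                 # give a loaded instance more time to respond
        "UnhealthyThreshold": 3,
        "HealthyThreshold": 2,
    },
)
```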
Question : During a scale storage operation, your DB Instance will not be available because it has a status of
Ans : 4 Exp : Analyzing your Database Workload on a DB Instance Using SQL Server Tuning Advisor: The Database Engine Tuning Advisor is a client application provided by Microsoft that analyzes database workload and recommends an optimal set of indexes for your SQL Server databases based on the kinds of queries you run. Like SQL Server Management Studio, you run Tuning Advisor from a client computer that connects to your RDS DB Instance that is running SQL Server. The client computer can be a local computer that you run on premises within your own network or it can be an Amazon EC2 Windows instance that is running in the same region as your RDS DB Instance.
This section shows how to capture a workload for Tuning Advisor to analyze. This is the preferred process for capturing a workload because RDS restricts host access to the SQL Server instance. The full documentation on Tuning Advisor can be found on MSDN.
To use Tuning Advisor, you must provide what is called a workload to the advisor. A workload is a set of Transact-SQL statements that execute against a database or databases that you want to tune. Database Engine Tuning Advisor uses trace files, trace tables, Transact-SQL scripts, or XML files as workload input when tuning databases. When working with RDS, a workload can be a file on a client computer or a database table on an RDS SQL Server DB accessible to your client computer. The file or the table must contain queries against the databases you want to tune in a format suitable for replay.
For Tuning Advisor to be most effective, a workload should be as realistic as possible. You can generate a workload file or table by performing a trace against your DB Instance. While a trace is running, you can either simulate a load on your DB Instance or run your applications with a normal load.
There are two types of traces: client-side and server-side. A client-side trace is easier to set up and you can watch trace events being captured in real time in SQL Server Profiler. A server-side trace is more complex to set up and requires some Transact-SQL scripting. In addition, because the trace is written to a file on the RDS DB Instance, storage space is consumed by the trace. It is important to keep track of how much storage space a running server-side trace uses, because the DB Instance could enter a storage-full state and would no longer be available if it runs out of storage space.
For a client-side trace, when a sufficient amount of trace data has been captured in the SQL Server Profiler, you can then generate the workload file by saving the trace to either a file on your local computer or a database table on a DB Instance that is available to your client computer. The main disadvantage of using a client-side trace is that the trace may not capture all queries when under heavy loads. This could weaken the effectiveness of the analysis performed by the Database Engine Tuning Advisor. If you need to run a trace under heavy loads and you want to ensure that it captures every query during a trace session, you should use a server-side trace.
For a server-side trace, you must get the trace files on the DB Instance into a suitable workload file or you can save the trace to a table on the DB Instance after the trace completes. You can use the SQL Server Profiler to save the trace to a file on your local computer or have the Tuning Advisor read from the trace table on the DB Instance.
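Since a server-side trace consumes DB Instance storage, one way to keep an eye on it is to poll the RDS FreeStorageSpace metric in CloudWatch, as in this boto3 sketch; the instance identifier is a placeholder.

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="FreeStorageSpace",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-sqlserver-instance"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"] / 1024 ** 3, 2), "GiB free")
```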
Question : In a VPC, a NAT instance must be able to send and receive traffic when the source or destination is not itself. Therefore
1. Source/Destination check should be enabled, which is disabled by default. 2. Source/Destination check should be disabled. 4. NAT instance should have a public IP address configured. Ans : 2 Exp : Each EC2 instance performs source/destination checks by default. This means that the instance must be the source or destination of any traffic it sends or receives. However, a NAT instance must be able to send and receive traffic when the source or destination is not itself. Therefore, you must disable source/destination checks on the NAT instance.
You can disable the SrcDestCheck attribute for a NAT instance that's either running or stopped using the console or the command line.
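A minimal boto3 sketch of doing this from the SDK side; the instance ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")
nat_instance_id = "i-0123456789abcdef0"  # placeholder NAT instance ID

# Disable the source/destination check so the instance can forward traffic.
ec2.modify_instance_attribute(InstanceId=nat_instance_id, SourceDestCheck={"Value": False})

# Optional verification of the attribute.
attr = ec2.describe_instance_attribute(InstanceId=nat_instance_id, Attribute="sourceDestCheck")
print(attr["SourceDestCheck"]["Value"])   # expected: False
```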
Question : For a VPC instance, select the statement which applies correctly
1. You can change the tenancy of an instance any time after you launch it. 2. You can't change the tenancy of an instance after you launch it. 4. None of the above Ans : 2 Exp : Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer. Your Dedicated Instances are physically isolated at the host hardware level from your instances that aren't Dedicated Instances and from instances that belong to other AWS accounts.
Each instance that you launch into a VPC has a tenancy attribute. You can't change the tenancy of an instance after you launch it. This attribute has the following values.
- default: Your instance runs on shared hardware.
- dedicated: Your instance runs on single-tenant hardware.
Each VPC has a related instance tenancy attribute. You can't change the instance tenancy of a VPC after you create it. This attribute has the following values.
- default: An instance launched into the VPC is a Dedicated Instance if the tenancy attribute for the instance is dedicated.
- dedicated: All instances launched into the VPC are Dedicated Instances, regardless of the value of the tenancy attribute for the instance.
If you are planning to use Dedicated Instances, you can implement them using either method:
•Create the VPC with the instance tenancy set to dedicated (all instances launched into this VPC are Dedicated Instances).
•Create the VPC with the instance tenancy set to default, and specify dedicated tenancy for any instances that should be Dedicated Instances when you launch them.
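Both approaches, sketched with boto3; the CIDR block, AMI ID, subnet ID, and instance type are illustrative, and note that not every instance type supports dedicated tenancy.

```python
import boto3

ec2 = boto3.client("ec2")

# Option 1: a dedicated-tenancy VPC, so every instance launched into it is a Dedicated Instance.
dedicated_vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16", InstanceTenancy="dedicated")

# Option 2: a default-tenancy VPC, requesting dedicated tenancy only for specific instances.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    Placement={"Tenancy": "dedicated"},   # this instance runs on single-tenant hardware
)
```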
Question : A company has configured and peered two VPCs: VPC-1 and VPC-2. VPC-1 contains only private subnets, and VPC-2 contains only public subnets. The company uses a single AWS Direct Connect connection and private virtual interface to connect their on-premises network with VPC-1. Which two methods increase the fault tolerance of the connection to VPC-1? Choose 2 answers A. Establish a hardware VPN over the internet between VPC-2 and the on-premises network. B. Establish a hardware VPN over the internet between VPC-1 and the on-premises network. C. Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2. D. Establish a new AWS Direct Connect connection and private virtual interface in a different AWS region than VPC-1. E. Establish a new AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1
Question : You are working in a big finance company, where it is mandatory that user login be based on temporary tokens and that authentication be based on user information stored in LDAP. Which of the following satisfies the given requirement? A. EC2 Roles B. MFA C. Root User D. Federation E. Security Group
Correct Answer : D Explanation: Both EC2 Roles and Federation depend on tokens. When an EC2 role is created, a temporary token is created and assigned to the principal or application to access AWS resources based on the policy defined. Token expiry varies from 15 minutes to 36 hours. Similarly, using LDAP for authentication requires Federation: once the principal is authenticated, a temporary token is provided to access AWS resources.
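A hedged boto3 sketch of the federation flow after the user has already been authenticated against LDAP by an identity broker; the user name, policy, and duration are illustrative.

```python
import json

import boto3

sts = boto3.client("sts")

# Scoped-down policy attached to the temporary credentials.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example-bucket/*"}
    ],
}

creds = sts.get_federation_token(
    Name="ldap-user-jsmith",          # identity already verified against LDAP by the broker
    Policy=json.dumps(policy),
    DurationSeconds=3600,             # temporary credentials only
)["Credentials"]

# The AccessKeyId, SecretAccessKey and SessionToken are then handed to the user or application.
```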
1. Amazon S3 has security problem with Cookies. 2. CloudFront cannot log cookies when you use Amazon S3 4. CloudFront cannot find Amazon S3 when Cookie logging is On.
1. Rejects the request as there cannot be a separate dimension for a single metric 2. Create a separate metric for each call 4. Create a separate metric, but overwrites the previous dimension data with the new dimension data