Question : Which is true about AWS S3 data replication?
1. AWS replicates S3 data across regions automatically.
2. AWS replicates S3 data across the Availability Zones in a Region
3. You have to create an IAM role to automatically replicate the data across the regions
4. None of the above
Correct Answer : 2
Explanation: By default, any object you create in an S3 bucket is replicated across the Availability Zones in the same Region, but it can be accessed from anywhere globally.
Cross-region replication is a bucket-level feature that enables automatic, asynchronous copying of objects across buckets in different AWS regions. To activate this feature, you add a replication configuration to your source bucket. In the configuration, you provide information such as the destination bucket where you want objects replicated to. You can request Amazon S3 to replicate all or a subset of objects with specific key name prefixes.
Once enabled, new objects uploaded to a particular S3 bucket are automatically replicated to a designated destination bucket located in a different AWS region. The replication process also copies any metadata and ACLs (Access Control Lists) associated with the object and can be enabled and managed through the S3 API.
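For reference, below is a minimal boto3 sketch of enabling cross-region replication on a versioned bucket. The bucket names and the IAM role ARN are placeholders; replication also requires versioning on both buckets and a role that Amazon S3 can assume.

import boto3

s3 = boto3.client('s3')

# Both buckets must have versioning enabled before replication can be configured.
s3.put_bucket_versioning(
    Bucket='source-bucket',
    VersioningConfiguration={'Status': 'Enabled'},
)

# Hypothetical role ARN; the role must let s3.amazonaws.com assume it and must
# grant read on the source bucket and replicate permissions on the destination.
s3.put_bucket_replication(
    Bucket='source-bucket',
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::123456789012:role/s3-crr-role',
        'Rules': [{
            'Prefix': '',              # replicate all objects
            'Status': 'Enabled',
            'Destination': {'Bucket': 'arn:aws:s3:::destination-bucket'},
        }],
    },
)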
Question : As you know, HadoopExam.com needs to provide video course access only to authorized members. How can that best be implemented in AWS, assuming the videos need to be streamed across various geographies?
1. Save videos in an S3 bucket and copy the same bucket to all other regions
2. Save videos in an S3 bucket and enable cross-region replication
3. Save videos in EMR
4. Use CloudFront with signed URLs
Correct Answer : 4
Explanation: A signed URL includes additional information, for example an expiration date and time, that gives you more control over access to your content. Many companies that distribute content via the Internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, for example users who have paid a fee. To securely serve this private content using CloudFront, you can do the following:
Require that your users access your private content by using special CloudFront signed URLs or signed cookies. Require that your users access your Amazon S3 content using CloudFront URLs, not Amazon S3 URLs. Requiring CloudFront URLs isn't required, but we recommend it to prevent users from bypassing the restrictions that you specify in signed URLs or signed cookies.
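A minimal sketch of generating a CloudFront signed URL with botocore's CloudFrontSigner follows; the key-pair ID, private-key path, and distribution domain are placeholders.

from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # Sign the policy with the private key that matches the CloudFront key pair.
    with open('cloudfront-private-key.pem', 'rb') as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner('KEYPAIRID123', rsa_signer)

# Canned policy: the URL simply expires at the given time.
signed_url = signer.generate_presigned_url(
    'https://d1234567890.cloudfront.net/videos/lesson-01.mp4',
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(signed_url)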
Question : Which of the following is a feature of IAM?
A. Central control of users and security credentials: You can control creation, rotation, and revocation of each user's AWS security credentials (such as access keys)
B. Central control of user access: You can control what data in the AWS system users can access and how they access it
C. Shared AWS resources: Users can share data for collaborative projects
D. Permissions based on organizational groups: You can restrict users' AWS access based on their job duties (for example, admin, developer, etc.) or departments. When users move inside the organization, you can easily update their AWS access to reflect the change in their role
E. Central control of AWS resources: Your organization maintains central control of the AWS data the users create, with no breaks in continuity or lost data as users move around within or leave the organization
1. A, B, C only 2. A, D, E only 3. A, B, E only 4. A, B, C, D only 5. A, B, C, D, E (all mentioned)
Correct Answer : 5
Explanation: IAM provides the following features:
•Central control of users and security credentials: You can control creation, rotation, and revocation of each user's AWS security credentials (such as access keys).
•Central control of user access: You can control what data in the AWS system users can access and how they access it.
•Shared AWS resources: Users can share data for collaborative projects.
•Permissions based on organizational groups: You can restrict users' AWS access based on their job duties (for example, admin, developer, etc.) or departments. When users move inside the organization, you can easily update their AWS access to reflect the change in their role.
•Central control of AWS resources: Your organization maintains central control of the AWS data the users create, with no breaks in continuity or lost data as users move around within or leave the organization.
•Control over resource creation: You can help make sure that users create AWS data only in sanctioned places.
•Networking controls: You can help make sure that users can access AWS resources only from within the organization's corporate network, using SSL.
•Single AWS bill: Your organization's AWS account gets a single AWS bill for all your users' AWS activity.
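As an illustration of permissions based on organizational groups, the sketch below creates a group, attaches an AWS managed policy, and adds a user to it; the group, user, and policy names are examples only.

import boto3

iam = boto3.client('iam')

# Group-based permissions: grant access by job duty rather than per user.
iam.create_group(GroupName='Developers')
iam.attach_group_policy(
    GroupName='Developers',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess',
)

# When someone joins or changes teams, group membership is all that changes.
iam.create_user(UserName='alice')
iam.add_user_to_group(GroupName='Developers', UserName='alice')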
The SQS message retention period is configurable and can be set anywhere from 1 minute to 2 weeks. The default is 4 days and once the message retention limit is reached your messages will be automatically deleted. The option for longer message retention provides greater flexibility to allow for longer intervals between message production and consumption.
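A short sketch of configuring the retention period on a queue (here the 14-day maximum, expressed in seconds); the queue name is a placeholder.

import boto3

sqs = boto3.client('sqs')

# MessageRetentionPeriod is given in seconds: 60 (1 minute) up to 1209600 (14 days).
queue = sqs.create_queue(
    QueueName='order-events',
    Attributes={'MessageRetentionPeriod': '1209600'},
)

# The attribute can also be changed later on an existing queue.
sqs.set_queue_attributes(
    QueueUrl=queue['QueueUrl'],
    Attributes={'MessageRetentionPeriod': '345600'},  # back to the 4-day default
)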
Question : You are working in a company which sells event tickets, e.g. movies, events and shows. Your company runs entirely on AWS resources. Every Friday morning, booking opens for the new shows/events/movies. As soon as this happens, you need many EC2 instances to serve the website traffic and processing. You have enabled Auto Scaling. Which is the mandatory requirement for the Auto Scaling group configuration?
A. You must define the minimum number of EC2 instances to be configured in the scaling group.
B. You must enable the health check of all the instances.
C. You must define the desired capacity for the website to work in normal mode.
D. You must provide the launch configuration to launch new EC2 instances.
Exp : An Amazon EC2 Windows AMI comes with an additional service installed, the Ec2Config service. The Ec2Config service runs as a local system and performs various functions to prepare an instance when it first boots up. After the devices have been mapped to the drives, the Ec2Config service then initializes and mounts the drives. The root drive is initialized and mounted as c:\. The instance store volumes that come attached to the instance are initialized and mounted as d:\, e:\, and so on. By default, when an Amazon EBS volume is attached to a Windows instance, it may show up as any drive letter on the instance. You can change the settings of the Ec2Config service to set the drive letters of the Amazon EBS volumes per your specifications.
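Relating back to the Auto Scaling question above, the sketch below shows the pieces an Auto Scaling group configuration needs: a launch configuration plus minimum and maximum size (desired capacity is optional and defaults to the minimum). The AMI ID, subnet, and names are placeholders.

import boto3

autoscaling = boto3.client('autoscaling')

# A launch configuration tells Auto Scaling how to launch new EC2 instances.
autoscaling.create_launch_configuration(
    LaunchConfigurationName='web-lc',
    ImageId='ami-12345678',          # placeholder AMI
    InstanceType='t2.micro',
)

# MinSize and MaxSize are required; DesiredCapacity defaults to MinSize if omitted.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='web-asg',
    LaunchConfigurationName='web-lc',
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=2,
    VPCZoneIdentifier='subnet-12345678',  # placeholder subnet
)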
Question : In the case of SQS, what happens if you issue a DeleteMessage request on a previously deleted message?
Under rare circumstances you might receive a previously deleted message again. This can occur in the rare situation in which a DeleteMessage operation doesn't delete all copies of a message because one of the servers in the distributed Amazon SQS system isn't available at the time of the deletion. That message copy can then be delivered again. You should design your application so that no errors or inconsistencies occur if you receive a deleted message again.
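A minimal sketch of the receive/process/delete loop; because a deleted message can occasionally be redelivered, processing should be idempotent. The queue URL is a placeholder.

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/order-events'  # placeholder

response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)

for message in response.get('Messages', []):
    # Process idempotently: handling the same message twice must not corrupt state.
    print('processing', message['MessageId'])

    # Per the explanation above, a DeleteMessage on a copy that was already
    # removed is harmless; design the consumer to tolerate redelivery.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message['ReceiptHandle'])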
Question : Amazon SQS uses proven cryptographic methods to authenticate your identity, through the use of your
Authentication mechanisms are provided to ensure that messages stored in Amazon SQS queues are secured against unauthorized access. Only the AWS account owners can access the queues they create.
Amazon SQS uses proven cryptographic methods to authenticate your identity, either through the use of your Access Key ID and request signature, or through the use of an X.509 certificate.
For additional security, you could build your application to encrypt messages before they are placed in a queue.
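For that additional layer, here is a sketch of client-side encryption before SendMessage, using the cryptography package's Fernet recipe as an assumed example; key management is out of scope here.

import boto3
from cryptography.fernet import Fernet

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/private-queue'  # placeholder

# In practice the key would come from a secure store, not be generated inline.
key = Fernet.generate_key()
fernet = Fernet(key)

ciphertext = fernet.encrypt(b'confidential payload')

# The queue only ever sees ciphertext; consumers holding the key can decrypt.
sqs.send_message(QueueUrl=queue_url, MessageBody=ciphertext.decode('utf-8'))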
Question : You have configured your analytics workload to be processed by a Hadoop infrastructure set up on AWS. Based on the data volumes, it auto scales and launches new AWS Spot Instances. You are running the entire workload in a single region. You have configured your Auto Scaling to have up to 100 EC2 instances. You run this entire batch processing during India night time. However, you have seen that your process is quite slow and only scales up to 20 EC2 instances. What could be the reason?
1. You cannot use more than 20 Spot Instances for auto scaling.
2. There is some misconfiguration in your auto scaling, which needs to be corrected
3.
4. Because you have capped the bill amount per day, which does not allow new instances to be launched.
5. There is a limit on the default number of EC2 instances per region, which is currently 20
Ans : 5
Exp : By default, there is an account limit of 20 Spot instances per region. If you terminate your Spot instance but do not cancel the request, the request counts against this limit until Amazon EC2 detects the termination and closes the request.
Spot instance limits are dynamic. When your account is new, your limit might be lower than 20 to start, but increase over time. In addition, your account might have limits on specific Spot instance types.
Auto Scaling Limits (default limits per resource):
Launch configurations per region: 100
Auto Scaling groups per region: 20
Scaling policies per Auto Scaling group: 50
Scheduled actions per Auto Scaling group: 125
Lifecycle hooks per Auto Scaling group: 50
SNS topics per Auto Scaling group: 10
Step adjustments per scaling policy: 20
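A sketch of requesting Spot Instances with boto3; the AMI and instance type are placeholders. Each open or active request counts against the per-region Spot limit discussed above.

import boto3

ec2 = boto3.client('ec2')

# Each of these requests counts against the default per-region Spot instance limit.
response = ec2.request_spot_instances(
    SpotPrice='0.05',                # maximum price you are willing to pay
    InstanceCount=5,
    Type='one-time',
    LaunchSpecification={
        'ImageId': 'ami-12345678',   # placeholder AMI
        'InstanceType': 'm4.large',
    },
)

for request in response['SpotInstanceRequests']:
    print(request['SpotInstanceRequestId'], request['State'])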
Question : Your registered EC2 instances behind an ELB can fail the health check performed by your load balancer. Which of the following reasons could be correct?
1. Elastic Load Balancing terminates a connection if it is idle for more than 60 seconds.
2. When the load balancer performs a health check, the instance may be under significant load and may take longer than your configured timeout interval to respond
3.
4. SSL certificate is not correct
Ans : 5
Exp : Registered instances failing the load balancer health check. Your registered instances can fail the health check performed by your load balancer for one or more of the following reasons:
•Problem: Instance(s) closing the connection to the load balancer. Cause: Elastic Load Balancing terminates a connection if it is idle for more than 60 seconds. The idle connection is established when there is no read or write event taking place on both the sides of the load balancer (client to load balancer and load balancer to the back-end instance). Solution: Set the idle timeout to at least 60 seconds on your registered instances.
•Problem: Responses timing out. Cause: When the load balancer performs a health check, the instance may be under significant load and may take longer than your configured timeout interval to respond. Solution: Try adjusting the timeout on your health check settings.
•Problem: Non-200 response received. Cause: When the load balancer performs an HTTP/HTTPS health check, the instance must return a 200 HTTP code. Any other response code will be considered a failed health check. Solution: Search your application logs for responses sent to the health check requests.
•Problem: Failing public key authentication. Cause: If you are using an HTTPS or SSL load balancer with back-end authentication enabled, the public key authentication will fail if the public key on the certificate does not match the public key configured on the load balancer. Solution: Check if your SSL certificate needs to be updated. If your SSL certificate is current, try re-installing the certificate on your load balancer.
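A sketch of tuning the health check on a Classic Load Balancer so that a loaded instance has more time to respond; the load balancer name and ping target are placeholders.

import boto3

elb = boto3.client('elb')  # Classic Load Balancer API

# A 200 response to the ping target within the timeout counts as a healthy check.
elb.configure_health_check(
    LoadBalancerName='web-elb',
    HealthCheck={
        'Target': 'HTTP:80/health',   # placeholder ping target
        'Interval': 30,
        'Timeout': 10,                # raise this if instances respond slowly under load
        'UnhealthyThreshold': 3,
        'HealthyThreshold': 2,
    },
)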
Question : During a scale storage operation, your DB Instance will not be available because it has the status of
Ans : 4
Exp : Analyzing your Database Workload on a DB Instance Using SQL Server Tuning Advisor. The Database Engine Tuning Advisor is a client application provided by Microsoft that analyzes database workload and recommends an optimal set of indexes for your SQL Server databases based on the kinds of queries you run. Like SQL Server Management Studio, you run Tuning Advisor from a client computer that connects to your RDS DB Instance that is running SQL Server. The client computer can be a local computer that you run on premises within your own network or it can be an Amazon EC2 Windows instance that is running in the same region as your RDS DB Instance.
This section shows how to capture a workload for Tuning Advisor to analyze. This is the preferred process for capturing a workload because RDS restricts host access to the SQL Server instance. The full documentation on Tuning Advisor can be found on MSDN.
To use Tuning Advisor, you must provide what is called a workload to the advisor. A workload is a set of Transact-SQL statements that execute against a database or databases that you want to tune. Database Engine Tuning Advisor uses trace files, trace tables, Transact-SQL scripts, or XML files as workload input when tuning databases. When working with RDS, a workload can be a file on a client computer or a database table on an RDS SQL Server DB accessible to your client computer. The file or the table must contain queries against the databases you want to tune in a format suitable for replay.
For Tuning Advisor to be most effective, a workload should be as realistic as possible. You can generate a workload file or table by performing a trace against your DB Instance. While a trace is running, you can either simulate a load on your DB Instance or run your applications with a normal load.
There are two types of traces: client-side and server-side. A client-side trace is easier to set up, and you can watch trace events being captured in real time in SQL Server Profiler. A server-side trace is more complex to set up and requires some Transact-SQL scripting. In addition, because the trace is written to a file on the RDS DB Instance, storage space is consumed by the trace. It is important to keep track of how much storage space a running server-side trace uses because the DB Instance could enter a storage-full state and would no longer be available if it runs out of storage space.
For a client-side trace, when a sufficient amount of trace data has been captured in the SQL Server Profiler, you can then generate the workload file by saving the trace to either a file on your local computer or in a database table on an DB Instance that is available to your client computer. The main disadvantage of using a client-side trace is that the trace may not capture all queries when under heavy loads. This could weaken the effectiveness of the analysis performed by the Database Engine Tuning Advisor. If you need to run a trace under heavy loads and you want to ensure that it captures every query during a trace session, you should use a server-side trace.
For a server-side trace, you must get the trace files on the DB Instance into a suitable workload file or you can save the trace to a table on the DB Instance after the trace completes. You can use the SQL Server Profiler to save the trace to a file on your local computer or have the Tuning Advisor read from the trace table on the DB Instance.
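Relating to the scale-storage question above, a sketch of checking a DB instance's status (for example 'modifying' during a storage change or 'storage-full' when space runs out); the instance identifier is a placeholder.

import boto3

rds = boto3.client('rds')

# DBInstanceStatus reports states such as 'available', 'modifying', or 'storage-full'.
response = rds.describe_db_instances(DBInstanceIdentifier='sqlserver-prod')
for db in response['DBInstances']:
    print(db['DBInstanceIdentifier'], db['DBInstanceStatus'], db['AllocatedStorage'])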
Question : In a VPC, a NAT instance must be able to send and receive traffic when the source or destination is not itself. Therefore
1. Source/Destination check should be enabled, which is disabled by default.
2. Source/Destination check should be disabled.
3.
4. The NAT instance should have a public IP address configured.
Ans : 2
Exp : Each EC2 instance performs source/destination checks by default. This means that the instance must be the source or destination of any traffic it sends or receives. However, a NAT instance must be able to send and receive traffic when the source or destination is not itself. Therefore, you must disable source/destination checks on the NAT instance.
You can disable the SrcDestCheck attribute for a NAT instance that's either running or stopped using the console or the command line.
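A one-call sketch of disabling the source/destination check on a NAT instance; the instance ID is a placeholder.

import boto3

ec2 = boto3.client('ec2')

# A NAT instance forwards traffic it is neither the source nor destination of,
# so the default source/destination check must be turned off.
ec2.modify_instance_attribute(
    InstanceId='i-0123456789abcdef0',        # placeholder NAT instance
    SourceDestCheck={'Value': False},
)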
Question : For a VPC instance, select the statement which applies correctly
1. You can change the tenancy of an instance any time after you launch it.
2. You can't change the tenancy of an instance after you launch it.
3.
4. None of the above
Ans : 2
Exp : Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer. Your Dedicated Instances are physically isolated at the host hardware level from your instances that aren't Dedicated Instances and from instances that belong to other AWS accounts.
Each instance that you launch into a VPC has a tenancy attribute. You can't change the tenancy of an instance after you launch it. This attribute has the following values.
default: Your instance runs on shared hardware.
dedicated: Your instance runs on single-tenant hardware.
Each VPC has a related instance tenancy attribute. You can't change the instance tenancy of a VPC after you create it. This attribute has the following values:
default: An instance launched into the VPC is a Dedicated Instance if the tenancy attribute for the instance is dedicated.
dedicated: All instances launched into the VPC are Dedicated Instances, regardless of the value of the tenancy attribute for the instance.
If you are planning to use Dedicated Instances, you can implement them using either method: •Create the VPC with the instance tenancy set to dedicated (all instances launched into this VPC are Dedicated Instances). •Create the VPC with the instance tenancy set to default, and specify dedicated tenancy for any instances that should be Dedicated Instances when you launch them
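A sketch of both approaches: creating a VPC with dedicated instance tenancy, and launching a single Dedicated Instance into a default-tenancy VPC. The CIDR block, AMI, and subnet are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Approach 1: every instance launched into this VPC will be a Dedicated Instance.
ec2.create_vpc(CidrBlock='10.0.0.0/16', InstanceTenancy='dedicated')

# Approach 2: default-tenancy VPC, but this particular instance is dedicated.
ec2.run_instances(
    ImageId='ami-12345678',                  # placeholder AMI
    InstanceType='m5.large',
    MinCount=1,
    MaxCount=1,
    SubnetId='subnet-12345678',              # placeholder subnet in a default-tenancy VPC
    Placement={'Tenancy': 'dedicated'},
)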
Question : A company has configured and peered two VPCs: VPC-1 and VPC-2. VPC-1 contains only private subnets, and VPC-2 contains only public subnets. The company uses a single AWS Direct Connect connection and private virtual interface to connect their on-premises network with VPC-1. Which two methods increase the fault tolerance of the connection to VPC-1? Choose 2 answers
A. Establish a hardware VPN over the Internet between VPC-2 and the on-premises network.
B. Establish a hardware VPN over the Internet between VPC-1 and the on-premises network.
C. Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
D. Establish a new AWS Direct Connect connection and private virtual interface in a different AWS region than VPC-1.
E. Establish a new AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1
Question : You are working in a big finance company, where it is mandatory that user login be based on temporary tokens and authentication be based on user information stored in LDAP. Which of the following satisfies the given requirement?
A. EC2 Roles
B. MFA
C. Root User
D. Federation
E. Security Group
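One way an identity broker can hand out temporary credentials after validating the user against LDAP is sts.get_federation_token (or assume_role); a hedged sketch, with the user name and inline policy as examples only.

import boto3

sts = boto3.client('sts')

# Called by an identity broker *after* it has authenticated the user against LDAP.
creds = sts.get_federation_token(
    Name='ldap-user-alice',
    DurationSeconds=3600,                    # temporary: one hour
    Policy='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"s3:GetObject","Resource":"*"}]}',
)['Credentials']

# These temporary credentials are what the user's session actually uses.
session = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)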
Amazon CloudWatch provides monitoring for AWS cloud resources and the applications customers run on AWS. Developers and system administrators can use it to collect and track metrics, gain insight, and react immediately to keep their applications and businesses running smoothly. Amazon CloudWatch monitors AWS resources such as Amazon EC2 and Amazon RDS DB instances, and can also monitor custom metrics generated by a customer's applications and services. With Amazon CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.
Amazon CloudWatch provides a reliable, scalable, and flexible monitoring solution that you can start using within minutes. You no longer need to set up, manage, or scale your own monitoring systems and infrastructure. Using Amazon CloudWatch, you can easily monitor as much or as little metric data as you need. Amazon CloudWatch lets you programmatically retrieve your monitoring data, view graphs, and set alarms to help you troubleshoot, spot trends, and take automated action based on the state of your cloud environment.
The user can set an alarm on all the CloudWatch metrics, such as the EC2 CPU utilization or the Auto Scaling group metrics. CloudWatch does not support AWS S3. Thus, it cannot set an alarm on the RRS lost objects.
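A sketch of setting an alarm on the EC2 CPUUtilization metric; the instance ID and SNS topic ARN are placeholders.

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm when the average CPU of one instance stays above 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName='high-cpu-web-01',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],  # placeholder
    Statistic='Average',
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:ops-alerts'],       # placeholder topic
)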
Question : You have been asked by the Chief Architect to implement tighter security control on each EC2 instance, because each EC2 instance does very critical data processing and, if the data is leaked, it can cause huge damage. He suggested that it is optional whether you want to keep any security at the subnet level. Which of the following will you mandatorily implement in this case?
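For context, the instance-level control in a VPC is the security group (network ACLs are the optional subnet-level layer); a sketch of creating one and allowing only HTTPS from a trusted range. The VPC ID and CIDR are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Security groups act at the instance level; network ACLs act at the subnet level.
sg = ec2.create_security_group(
    GroupName='critical-processing-sg',
    Description='Tight instance-level control for critical data processing',
    VpcId='vpc-12345678',                    # placeholder VPC
)

# Allow only HTTPS from a trusted corporate range; everything else stays blocked.
ec2.authorize_security_group_ingress(
    GroupId=sg['GroupId'],
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 443,
        'ToPort': 443,
        'IpRanges': [{'CidrIp': '203.0.113.0/24'}],
    }],
)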