Question : You are configuring ELB. Which of the options below allows the user to route traffic to all instances irrespective of the AZ instance count?
1. Cross-zone load balancing
2. Across-zone load balancing
4. Round robin
Ans : 1
Explanation: Elastic Load Balancing provides the option to either enable or disable cross-zone load balancing for the load balancer. With cross-zone load balancing, the load balancer nodes route traffic to the back-end instances across all the Availability Zones.
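The effect of the setting can be illustrated with a toy simulation (the AZ names and instance IDs are made up, and real ELB routing is more involved than simple round-robin):

```python
def distribute(azs, total_requests, cross_zone):
    """Toy round-robin model of an ELB. azs maps an AZ name to the list of
    registered instance IDs in that AZ. Without cross-zone load balancing,
    each AZ receives an equal share of requests split only among its own
    instances; with it, requests rotate over every registered instance."""
    counts = {inst: 0 for instances in azs.values() for inst in instances}
    if cross_zone:
        pool = [inst for instances in azs.values() for inst in instances]
        for n in range(total_requests):
            counts[pool[n % len(pool)]] += 1
    else:
        share = total_requests // len(azs)  # traffic is split per AZ first
        for instances in azs.values():
            for n in range(share):
                counts[instances[n % len(instances)]] += 1
    return counts
```

With four instances in one AZ and one in another, disabling cross-zone load balancing leaves the lone instance handling half of all traffic, while enabling it gives every instance an equal share.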
Question : An ELB has instances registered with it. Several instances are running in one AZ, while fewer are running in each of two other AZs. By default, when a user request arrives, how will ELB distribute the load?
1. The AZ with the higher instance count will receive more requests than the others
2. Requests are distributed across all instances equally
3. Requests are distributed equally across all the enabled AZs, irrespective of the instance count in each
4. New requests go to the AZ with the higher instance count, while older requests go to the AZs with fewer instances
Ans : 3
Explanation: If the EC2 instances count is imbalanced across the AZ, the load balancer begins to route traffic equally amongst all the enabled Availability Zones irrespective of the instance count in each zone. If the user wants to distribute traffic equally amongst all the instances, the user needs to enable cross zone load balancing.
Question : You have instances registered with ELB. One of the instances is being deregistered by Auto Scaling. How can you configure the deregistered instance so that it does not receive new requests?
1. Enable session deregistration with ELB
2. Remove session stickiness on ELB
4. Enable connection draining with ELB
Ans : 4
Explanation: Connection draining causes the ELB load balancer to stop sending new requests to a deregistered instance or an unhealthy instance, while keeping the existing connections open. This allows the load balancer to complete the in-flight requests made to the deregistered or unhealthy instances.
Question : Which AWS service lets you provision a logically isolated section of the AWS Cloud where you can launch resources in a virtual network that you define?
1. Amazon ServiceBus
2. Amazon EMR
4. Amazon VPC
Ans : 4 Exp : Amazon VPC lets you provision a logically isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can also create a Hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter.
You can easily customize the network configuration for your Amazon VPC. For example, you can create a public-facing subnet for your web servers that have access to the Internet, and place your backend systems such as databases or application servers in a private-facing subnet with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet.
Question :
In regard to CloudFormation, In the Conditions section you can reference _________.
1. The logical ID of a resource, and other conditions and values from the Parameters and Mappings sections of a template
2. The logical ID of a resource in a condition
3. Other conditions and values from the Parameters and Mappings sections of a template
4. Neither the logical ID nor other conditions and values from the Parameters and Mappings sections of a template
Ans : 3 Exp : All conditions are defined in the Conditions section of a template. You use intrinsic functions to define a condition.
The CreateProdInstance condition evaluates to true if the EnvType parameter is equal to prod. The EnvType parameter is an input parameter that you specify when you create or update a stack.
Note
In the Conditions section, you can only reference other conditions and values from the Parameters and Mappings sections of a template. For example, you cannot reference the logical ID of a resource in a condition, but you can reference a value from an input parameter.
To use the condition, you reference it in the Resources section of a template, associating it with a specific resource. After you do, the resource is created only when the condition evaluates to true.
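The CreateProdInstance example can be sketched as data plus a tiny evaluator. This is an illustrative subset (only the Fn::Equals and Ref intrinsics), not the real CloudFormation engine:

```python
def resolve(value, parameters):
    """Resolve a Ref against the stack's input parameters; literals pass through."""
    if isinstance(value, dict) and "Ref" in value:
        return parameters[value["Ref"]]
    return value

def evaluate_condition(condition, parameters):
    """Evaluate a condition built from the Fn::Equals intrinsic function."""
    if "Fn::Equals" in condition:
        left, right = (resolve(v, parameters) for v in condition["Fn::Equals"])
        return left == right
    raise ValueError("only Fn::Equals is modeled in this sketch")

# Conditions section: CreateProdInstance is true when EnvType equals "prod".
conditions = {"CreateProdInstance": {"Fn::Equals": [{"Ref": "EnvType"}, "prod"]}}
```

Note that the condition only touches a parameter value, never a resource's logical ID, matching the restriction described above.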
Question :
If you use an AWS SDK, does the SDK handle the signing process of your REST/Query requests for you?
1. The AWS SDK doesn't need a signing process
2. No
3. Yes
Ans : 3 Exp : There are two ways you can programmatically call the functionality exposed by an Amazon Web Services (AWS) API: submit a REST/Query request over HTTP/HTTPS, or call wrapper functions in one of the AWS SDKs. This guide describes how to sign your REST/Query requests. If you use an AWS SDK, the SDK handles the signing process for you.
REST/Query Requests
REST or Query requests are HTTP or HTTPS requests that use an HTTP verb (such as GET or POST) and a parameter named Action or Operation that specifies the API you are calling. Calling an API using a REST or Query request is the most direct way to access a web service, but requires that your application handle low-level details such as generating the hash to sign the request, and error handling. The benefit of using a REST or Query request is that you have access to the complete functionality of an API.
Note
Some AWS products, such as Amazon S3 and Amazon Route 53, provide a REST API. Other AWS products, such as Amazon EC2, provide a Query API that is similar to REST, but does not adhere completely to REST principles.
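The "generating the hash to sign the request" step that the SDKs hide is, for Signature Version 4, a chain of HMAC-SHA256 operations. A minimal sketch of the key-derivation and signing steps (the credential and scope values used in the test are placeholders; a real request also needs a canonical request and a string to sign built from it):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key, date_stamp, region, service):
    """Derive the Signature Version 4 signing key by chaining HMAC-SHA256
    over the date, region, service, and the fixed string 'aws4_request'."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date_stamp.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

def sigv4_signature(signing_key, string_to_sign):
    """Final signature: hex-encoded HMAC-SHA256 of the string to sign."""
    return hmac.new(signing_key, string_to_sign.encode(), hashlib.sha256).hexdigest()
```

Because the key is scoped to a date, region, and service, a leaked signature cannot be replayed against a different service or region.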
Question :
In context of VPC, select the correct statement
1. VPC includes a default security group whose initial rules allow all inbound traffic; you can't delete this group
2. VPC includes a default security group whose initial rules deny all inbound traffic, and you can delete this group
3. VPC includes a default security group whose initial rules deny all inbound traffic and allow all outbound traffic; you can't delete this group
4. VPC includes a default security group whose initial rules allow all inbound traffic, and you can delete this group
Ans : 3 Exp : A VPC includes a default security group whose initial rules are to deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances in the group. You can't delete this group; however, you can change the group's rules. The procedure is the same as modifying any other security group.
Question : Will you be able to access EBS snapshots using the regular Amazon S3 APIs?
1. Yes, all snapshots are stored in S3 and you can access them using the S3 APIs
2. No, snapshots are only available through the Amazon EBS APIs
3. No, snapshots are only available through the Amazon EC2 APIs
4. Yes, if you have chosen the snapshot to be stored in S3, you can access it using the S3 APIs
Ans : 3
Exp : No, snapshots are only available through the Amazon EC2 APIs.
Question :
In VPC, subnets that aren't explicitly associated with any route table have an ______ association with the main route table.
1. Implicit as well as explicit
2. Explicit
3. Implicit
4. FileDirectory
5. None of these
Ans : 3 Exp : When you create a VPC, it automatically has a main route table. Initially, the main route table (and every route table in a VPC) contains only a single route: a local route that enables communication within the VPC.
You can't modify the local route in a route table. Whenever you launch an instance in the VPC, the local route automatically covers that instance; you don't need to add the new instance to a route table.
If you don't explicitly associate a subnet with a route table, the subnet is implicitly associated with the main route table. However, you can still explicitly associate a subnet with the main route table. You might do that if you change which table is the main route table (see Replacing the Main Route Table).
The console shows the number of subnets associated with each table; only explicit associations are included in that number. When you add a gateway to a VPC (either an Internet gateway or a virtual private gateway), you must update the route table for any subnet that uses that gateway.
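The implicit-association rule, and the fact that the console counts only explicit associations, can be sketched as follows (the subnet and route-table IDs are made up):

```python
def effective_route_table(subnet_id, explicit_associations, main_table_id):
    """A subnet uses its explicitly associated route table if it has one;
    otherwise it is implicitly associated with the VPC's main route table."""
    return explicit_associations.get(subnet_id, main_table_id)

def console_association_count(table_id, explicit_associations):
    """The console's per-table subnet count includes explicit associations only,
    so implicitly associated subnets are not counted against the main table."""
    return sum(1 for t in explicit_associations.values() if t == table_id)
```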
Question :
__________ and Puppet can be used together to automate your entire deployment and management processes, from your AWS resources through to your application artifacts.
1. Amazon Glacier
2. AWS CloudFormation
4. AWS Elastic Beanstalk
Ans : 2 Exp : AWS CloudFormation gives you an easy way to create the set of resources, such as Amazon EC2 instances, Amazon RDS database instances and Elastic Load Balancers, needed to run your application. The template describes what resources you need, and AWS CloudFormation takes care of how: provisioning the resources in an orderly and predictable fashion, handling and recovering from any failures or issues.
AWS CloudFormation can help you to configure and/or install your application as well as how to bootstrap deployment and management tools that you may already use in your environment. Puppet is an open source platform for provisioning, configuring and patching applications and operating system components. AWS CloudFormation and Puppet can be used together to automate your entire deployment and management processes, from your AWS resources through to your application artifacts.
Question : How many subnets can be created within a single VPC?
Ans : 200 per VPC by default; this limit can be increased upon request.
Question : Which is the wrong statement regarding "Security Group" in VPC
1. Operates at the instance level (first layer of defense)
2. Supports allow rules only
4. Is stateless: return traffic must be explicitly allowed by rules
5. It evaluates all rules before deciding whether to allow traffic
Ans : 4 Exp : The following summarizes the basic differences between security groups and network ACLs.
Security group:
* Operates at the instance level (first layer of defense)
* Supports allow rules only
* Is stateful: return traffic is automatically allowed, regardless of any rules
* All rules are evaluated before deciding whether to allow traffic
* Applies to an instance only if someone specifies the security group when launching the instance, or associates the security group with the instance later on
Network ACL:
* Operates at the subnet level (second layer of defense)
* Supports allow rules and deny rules
* Is stateless: return traffic must be explicitly allowed by rules
* Rules are processed in number order when deciding whether to allow traffic
* Automatically applies to all instances in the subnets it's associated with (backup layer of defense, so you don't have to rely on someone specifying the security group)
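The different evaluation models can be sketched in Python. This is a toy model matching on port and source CIDR only; real rules also carry protocol, direction, and port ranges:

```python
import ipaddress

def sg_allows(allow_rules, port, src_ip):
    """Security group model: allow rules only; every rule is considered,
    and any match permits the traffic (return traffic is implicit because
    security groups are stateful)."""
    src = ipaddress.ip_address(src_ip)
    return any(port == rule["port"] and src in ipaddress.ip_network(rule["cidr"])
               for rule in allow_rules)

def nacl_allows(numbered_rules, port, src_ip):
    """Network ACL model: rules are processed in number order, may allow
    or deny, and the first match wins; no match means implicit deny."""
    src = ipaddress.ip_address(src_ip)
    for _, rule in sorted(numbered_rules.items()):
        if port == rule["port"] and src in ipaddress.ip_network(rule["cidr"]):
            return rule["action"] == "allow"
    return False
```

The rule-number ordering is what lets a NACL deny a narrow range before a broader allow rule, something a security group cannot express.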
Question : For which of the following can you apply Multi-Factor Authentication?
Ans : 2 Exp :AWS Multi-Factor Authentication (AWS MFA) provides an extra level of security that you can apply to your AWS environment. With AWS MFA enabled, when you sign in to an AWS website, you are prompted for your user name and password, as well as for an authentication code from an MFA device. Taken together, these multiple factors provide increased security for your AWS account settings and resources. You can enable MFA for the root account and for IAM users. For more information, see Using Multi-Factor Authentication (MFA) Devices with AWS in Using IAM.
Question : Speaking about IAM policies, if there are multiple conditions, or if there are multiple keys in a single condition, the conditions are evaluated using a logical ________.
Ans : 2 Exp : If there are multiple conditions, or if there are multiple keys in a single condition, the conditions are evaluated using a logical AND. If a single condition includes multiple values for one key, the condition is evaluated using a logical OR. All conditions must be met for an allow or an explicit deny decision. If a condition isn't met, the result is a deny.
For example, with entries written as Key: Value:
Condition 1: (Key1: Value1a OR Value1b) AND (Key2: Value2)
AND
Condition 2: (Key3: Value3)
Multiple values listed for a single key are ORed together; separate keys, and separate conditions, are ANDed.
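A sketch of that evaluation logic in Python (the key names mirror the example above; this illustrates only the AND/OR combination, not IAM's full set of condition operators):

```python
def conditions_met(conditions, request_context):
    """Multiple conditions, and multiple keys within one condition, are
    ANDed; multiple values listed for a single key are ORed."""
    return all(
        request_context.get(key) in values      # OR across the value list
        for condition in conditions             # AND across conditions...
        for key, values in condition.items()    # ...and across keys
    )

# Condition 1 has two keys; Key1 lists two acceptable values.
policy_conditions = [
    {"Key1": ["Value1a", "Value1b"], "Key2": ["Value2"]},  # condition 1
    {"Key3": ["Value3"]},                                  # condition 2
]
```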
Question : In DynamoDB you can issue a Scan request. By default, the Scan operation processes data sequentially. DynamoDB returns data to the application in ______ increments, and the application performs additional Scan operations to retrieve the next ______ of data.
1. 0.1 MB
2. 10 MB
3. 1 MB
4. 5 MB
Ans : 3
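The paging behavior can be sketched with a toy in-memory Scan (assuming every item is smaller than the page limit; a real Scan returns a LastEvaluatedKey attribute map, not an index):

```python
PAGE_LIMIT_BYTES = 1_000_000  # roughly the 1 MB Scan increment

def scan_page(items, start_key):
    """Return one page of results plus a LastEvaluatedKey (or None at the end)."""
    page, size, i = [], 0, start_key
    while i < len(items) and size + items[i]["bytes"] <= PAGE_LIMIT_BYTES:
        page.append(items[i])
        size += items[i]["bytes"]
        i += 1
    return page, (i if i < len(items) else None)

def scan_all(items):
    """What an application does: keep issuing Scan requests, feeding the
    returned LastEvaluatedKey back in, until no key is returned."""
    results, key = [], 0
    while key is not None:
        page, key = scan_page(items, key)
        results.extend(page)
    return results
```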
Question : In CloudFormation, what can you retrieve with the aws cloudformation list-stacks command?
1. A list of any of the stacks you have created.
2. A list of any of the stacks you have created, or that have been deleted up to 90 days ago.
4. A 90-day history list of all your activity on stacks.
Ans : 2 Exp : The aws cloudformation list-stacks command enables you to get a list of any of the stacks you have created (even those which have been deleted up to 90 days ago). You can use an option to filter results by stack status, such as CREATE_COMPLETE and DELETE_COMPLETE. The aws cloudformation list-stacks command returns summary information about any of your running or deleted stacks, including the name, stack identifier, template, and status.
Note
The aws cloudformation list-stacks command returns information on deleted stacks for 90 days after they have been deleted.
Question : In regard to VPC, what is the default maximum number of virtual private gateways allowed per region?
Ans : 3 Exp : Amazon VPC limits:
* VPCs per region: 5. This limit can be increased upon request.
* Subnets per VPC: 200. This limit can be increased upon request.
* Internet gateways per region: 5. You can create as many Internet gateways as your VPCs-per-region limit. Only one Internet gateway can be attached to a VPC at a time.
* Virtual private gateways per region: 5. Only one virtual private gateway can be attached to a VPC at a time.
* Customer gateways per region: 50. This limit can be increased upon request.
* VPN connections per region: 50. Ten per virtual private gateway.
* Route tables per VPC: 200. Including the main route table. You can associate one route table with one or more subnets in a VPC.
* Entries per route table: 50. This is the limit for the number of nonpropagated entries per route table. This limit can be increased upon request; however, network performance may be impacted as the number of nonpropagated route entries increases.
* Elastic IP addresses per region for each AWS account: 5. This is the limit for the number of VPC Elastic IPs you can allocate within a region. This is a separate limit from the EC2 Elastic IP address limit.
* Security groups per VPC: 100. This limit can be increased upon request; however, network performance may be impacted as the number of security groups is increased, depending on the way the security groups are configured.
* Rules per security group: 50. This limit can be increased or decreased upon request; however, the product of rules per security group and security groups per network interface cannot exceed 250. For example, if you want 100 rules per security group, we'd need to decrease your number of security groups per network interface to 2.
* Security groups per network interface: 5. This limit can be increased or decreased upon request; however, the product of security groups per network interface and rules per security group cannot exceed 250. For example, if you want 10 security groups per network interface, we'd need to decrease your number of rules per security group to 25.
* Network ACLs per VPC: 200. You can associate one network ACL with one or more subnets in a VPC. This limit is not the same as the number of rules per network ACL.
* Rules per network ACL: 20. This is the sum of the number of ingress and egress rules in a single network ACL. The maximum limit is 40 rules per network ACL.
* BGP advertised routes per VPN connection: 100. This limit can be increased upon request; however, network performance may be impacted as the number of advertised routes is increased.
* Active VPC peering connections per VPC: 50. This limit can be increased via special request to AWS Developer Support. The maximum limit is 125 peering connections per VPC. The number of entries per route table should be increased accordingly; however, network performance may be impacted as the number of entries in a route table is increased.
* Outstanding VPC peering connection requests: 25. This is the limit for the number of outstanding VPC peering connection requests that you've requested from your account. This limit can be increased via special request to AWS Developer Support.
* Expiry time for an unaccepted VPC peering connection request: 1 week (168 hours). This limit can be increased via special request to AWS Developer Support.
Question : Elasticity is a fundamental property of the cloud. What best describes elasticity?
1. Power to scale computing resources up and down easily, with minimal friction
2. Ability to create services without having to administer resources
4. Power to scale computing resources up easily, but not down
Ans : 1 Exp : Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.
Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure resilient applications and isolate themselves from common failure scenarios.
Question : Elasticity can be defined as the degree to which an infrastructure is able to adapt to workloads by scaling (provisioning) resources up and down automatically.
1. True 2. False
Ans : 1 Exp : Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change; this automatic adaptation to the workload is exactly what elasticity describes.
Question : Elasticity is the ability to easily scale resources up but requires manual intervention to scale resources down.
1. True 2. False
Ans : 2 Exp : Elasticity covers scaling in both directions: Amazon EC2 lets you quickly scale capacity both up and down as your computing requirements change, with no manual intervention required to scale down.
Question : Scalability is a fundamental property of a good AWS system. What best describes scalability on AWS?
1. Scalability is the concept of planning ahead for the maximum resources that will be required and building your infrastructure based on that capacity plan.
2. The law of diminishing returns will apply to resources as they are increased with workload.
3. Increasing resources results in a proportional increase in performance, and the service remains operationally efficient and resilient as it grows.
4. Scalability is not a fundamental property of the cloud.
Ans : 3 Exp : Auto Scaling allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define. With Auto Scaling, you can ensure that the number of Amazon EC2 instances you're using increases seamlessly during demand spikes to maintain performance, and decreases automatically during demand lulls to minimize costs. Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by Amazon CloudWatch and available at no additional charge beyond Amazon CloudWatch fees.
Question : As your application and infrastructure on AWS grow, pricing should become more cost effective.
1. True 2. False
Ans : 1 Exp : Even as storage becomes more plentiful and affordable, businesses are still faced with the task of managing their growing storage infrastructure. Amazon Web Services provides a cost-effective solution for storing information in the cloud that eliminates the burden of provisioning and managing hardware. Amazon Simple Storage Service (Amazon S3) provides a highly scalable, reliable, and inexpensive data storage infrastructure that enables you to build dependable backup solutions. Thousands of customers already use Amazon S3 as their backup location, and other customers have created compelling end-user backup, storage, and disaster recovery solutions using AWS.
Question : A scalable AWS infrastructure is considered to be operationally efficient.
1. True 2. False
Ans : 1 Exp : Characteristics of a truly scalable application:
* Increasing resources results in a proportional increase in performance
* A scalable service is capable of handling heterogeneity
* A scalable service is operationally efficient
* A scalable service is resilient
* A scalable service should become more cost effective when it grows (cost per unit reduces as the number of units increases)
Question : AWS manages all the scalable infrastructure requirements for all AWS services, leaving no responsibility for the administrator. 1. True 2. False
Ans : 2 Exp : The advent of cloud has changed the role of the System Administrator to a Virtual System Administrator. This simply means that the daily tasks performed by these administrators have become even more interesting as they learn more about applications and decide what's best for the business as a whole. The System Administrator no longer needs to provision servers, install software, and wire up network devices, since all of that grunt work is replaced by a few clicks and command line calls. The cloud encourages automation because the infrastructure is programmable. System administrators need to move up the technology stack and learn how to manage abstract cloud resources using scripts. Likewise, the role of Database Administrator changes into a Virtual Database Administrator, in which he or she manages resources through a web-based console, executes scripts that add new capacity programmatically in case the database hardware runs out of capacity, and automates the day-to-day processes. The virtual DBA has to learn new deployment methods (virtual machine images), embrace new models (query parallelization, geo-redundancy and asynchronous replication), rethink the architectural approach for data (sharding, horizontal partitioning, federating) and leverage different storage options available in the cloud for different types of datasets.
However, an administrator is still needed.
Question : Amazon Auto Scaling is not meant to handle instant load spikes but is built to grow with a gradual increase in usage over a short time period.
1. True 2. False
Ans : 1 Exp : When you use the Amazon Web Services (AWS) Auto Scaling service, you can increase the number of Amazon Elastic Compute Cloud (EC2) instances (cloud servers) you're using automatically when user demand goes up, and you can decrease the number of EC2 instances when demand goes down. As Auto Scaling dynamically adds and removes EC2 instances, you need to ensure that the traffic coming to your web application is distributed across all of your running EC2 instances. AWS provides Elastic Load Balancing to automatically distribute the incoming web traffic (called the load) among all the EC2 instances that you are running. Elastic Load Balancing manages incoming requests by optimally routing traffic so that no one instance is overwhelmed. Using Elastic Load Balancing with your auto-scaled web application makes it easy to route traffic among your dynamically changing fleet of EC2 instances.
Daily spikes and valleys: this pattern is usually observed at e-commerce companies which have peak usage for the 12 hours between 8:00 am and 8:00 pm, while for the rest of the day capacity is under-utilized. Imagine you are running 20 m1.large instances for your web or app tier and they are fully utilized during peak hours. During the off-peak period the load decreases gradually, and approximately 25% overall utilization is observed at night. Since only 25% utilization is observed at night, reducing your capacity to 5 EC2 web or app instances in an automated way will save infrastructure and labor costs.
Question : VPC has a monthly cost to use. 1. True 2. False
Ans : 2 Exp : There is no additional charge for using Amazon Virtual Private Cloud, aside from the normal Amazon EC2 usage charges.
If you choose to create a Hardware VPN Connection to your VPC using a Virtual Private Gateway, you are charged for each "VPN Connection-hour" that your VPN connection is provisioned and available. Each partial VPN Connection-hour consumed is billed as a full hour. You also incur standard AWS data transfer charges for all data transferred via the VPN Connection. If you no longer wish to be charged for a VPN Connection, you simply terminate your VPN Connection using the AWS Management Console, commandline interface, or API.
Question : Being a QuickTechie.com website developer, you are performing a web request to change the storage class of an S3 object. Which request header should the user use?
1. x-amz-object-class
2. x-amz-storage-class
4. x-amz-metadata-directive
Ans : 2 Exp : The user can change the storage class of an object which is already stored in Amazon S3 by copying it to the same key name in the same bucket. The user is required to use the following request headers in a PUT Object - Copy request: x-amz-metadata-directive set to COPY, and x-amz-storage-class set to STANDARD or REDUCED_REDUNDANCY.
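A small helper that builds those two headers (a sketch; a real PUT Object - Copy request also carries the x-amz-copy-source header and request authentication):

```python
def storage_class_copy_headers(storage_class):
    """Headers used when copying an object onto its own key to change
    its storage class while keeping the existing metadata."""
    if storage_class not in ("STANDARD", "REDUCED_REDUNDANCY"):
        raise ValueError("storage class not covered by this example: %r" % storage_class)
    return {
        "x-amz-metadata-directive": "COPY",    # keep the object's metadata
        "x-amz-storage-class": storage_class,  # the new storage class
    }
```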
Question : Select the correct statement for AWS EC2.
1. The user has to use only an Elastic IP with an instance store backed AMI instance.
2. The private IP address and public IP address for an instance are not directly mapped to each other.
4. An instance launched from an instance store backed AMI will always have a fixed public DNS for the life of the instance.
Ans : 4 Exp : When a user launches an EC2 instance, AWS assigns a public and a private DNS name to the instance, and the private and public addresses are mapped to each other using NAT. An EBS-backed instance, if stopped and started, will get a new public and private DNS. However, an instance launched from an instance store backed AMI keeps a single public DNS throughout its lifecycle, as such instances cannot be stopped and re-started.
Question : HadoopExam.com is a static website hosted in an AWS S3 bucket, and all the video trainings are part of a paid subscription. How can you manage that with a static website?
1. The user should enable the requester pays option for the bucket
2. It is not possible to configure requester pays for AWS S3
3. It is not possible to configure requester pays or DevPay on a bucket enabled for static website hosting
4. The user should enable DevPay on the bucket
Ans : 3 Exp : With regard to S3, it is not possible for the user to configure the DevPay or requester pays option when the bucket is enabled for static website hosting.
Question : HadoopExam.com is a static website hosted in an AWS S3 bucket. Which of the below is not correct?
1. It supports redirection
2. It supports GET and HEAD requests
3. In case of an error, it returns an XML formatted error document
4. It does not support SSL connections
Ans : 3 Exp : To host a static website, the user needs to configure an Amazon S3 bucket for website hosting and then upload the website content to the bucket. The website endpoint is optimized for access from a web browser: in case of an error it returns an HTML error document, not an XML-formatted one.
Question : A HadoopExam developer has created a bucket named HadoopExam.com and is trying to access the bucket with the URL http://hadoopexam.com.s3.amazonaws.com. What will it return?
1. It will ask the user to provide the bucket access credentials
2. If the bucket is public, it will show the list of all the objects in the bucket
4. It will return an error
Ans : 4 Exp : The S3 bucket name "HadoopExam.com" contains capital letters. When the user tries to access it, the host name is effectively lowercased to http://hadoopexam.com.s3.amazonaws.com. Since a bucket with the all-lowercase name does not exist, it will return an error.
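The lowercasing behavior can be illustrated as follows (DNS host names are case-insensitive, so the virtual-hosted-style host is effectively lowercased):

```python
def virtual_hosted_url(bucket_name):
    """Virtual-hosted-style URL the browser ends up requesting: the bucket
    name becomes part of the DNS host name, which is case-insensitive, so
    any capital letters are effectively lowercased."""
    return "http://%s.s3.amazonaws.com" % bucket_name.lower()
```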
Question : HadoopExam.com developer is creating a bucket policy to allow access to some tester. Which of the below mentioned options is not a valid resource name as a part of the bucket policy?
Ans : 1 Exp : The resource section of an S3 bucket policy accepts an ARN. The ARN has the following format: arn:partition:service:region:namespace:relative-id. Here, "aws" is a common partition name; if the user's resources are in the China (Beijing) Region, "aws-cn" is the partition name. For S3, the user need not specify the region or the namespace, and as part of the relative ID the user can specify the bucket or the object name.
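That six-field format can be checked with a small parser (an illustration only; real ARN validation also checks the service-specific resource part):

```python
def parse_arn(arn):
    """Split an ARN into its six colon-separated fields:
    arn:partition:service:region:account:resource. For S3 the region and
    account (namespace) fields are left empty."""
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn":
        raise ValueError("not a valid ARN: %r" % arn)
    fields = ("scheme", "partition", "service", "region", "account", "resource")
    return dict(zip(fields, parts))
```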
Question : As a HadoopExam developer you are trying to create a policy for an IAM user who is a member of the Quality Assurance team. Which of the below mentioned options is not a valid ARN to use with the policy?
1. arn:aws:sts::123456789012:federated-user/cloud
2. arn:aws:iam::123456789012:federated-user/cloud
4. arn:aws:iam::123456789012:user/division_abc/subdivision_xyz/cloud
Ans : 2 Exp : AWS resources are always identified by an ARN. A valid ARN looks like arn:aws:service:region:account:resource. You can use ARNs in IAM for users (IAM and federated), groups, roles, instance profiles, and virtual MFA devices. In this case, a federated user is an STS resource, not an IAM resource, so option 2 is invalid.
Question : As the developer and owner of HadoopExam.com you have created an IAM user with the name hadoopexam, and you want to give that user EC2 access to the US West region only. How can the owner configure this?
1. Create an IAM policy and define the region in the condition
2. Create an IAM user in the US West region and give access to EC2
4. It is not possible to provide access based on the region
Ans : 1 Exp : IAM users are global, not region specific. If the user wants to configure a region-specific restriction, he needs to provide conditions as part of the policy.
Question : Being a HadoopExam developer you have launched one EC2 instance in the US East region and one in the US West region, and you have also launched an RDS instance in the US East region. How can the developer configure access from both the EC2 instances to RDS?
1. Configure the US West region's security group to allow a request from the US East region's instance, and configure the RDS security group's ingress rule for the US East EC2 group
2. Configure the security group of the US East region to allow traffic from the US West region's instance, and configure the RDS security group's ingress rule for the US East EC2 group
4. Configure the security group of both instances in the ingress rule of the RDS security group
Ans : 2 Exp : The user cannot authorize an Amazon EC2 security group if it is in a different AWS Region than the RDS DB instance. The user can authorize an IP range, or specify an Amazon EC2 security group in the same region that refers to an IP address in another region. In this case, allow the IP of the US West instance inside the US East security group and open the RDS security group's ingress rule for the US East EC2 security group.
Question : Being a HadoopExam.com developer you have launched a website on an EC2-Classic instance running Apache, and it needs to access a DB hosted on another instance. How will you allow traffic to the DB securely from the source webserver instance? 1. Open port 0.0.0.0/0 in the DB instance security group for the DB port 2. Open the inbound DB port on the Webserver instance for the DB server Instance IP 3. Access Mostly Uused Products by 50000+ Subscribers 4. Configure the DB instance security group so that it allows traffic on the DB port from the webserver instance security group
Ans : 4 Exp : When a user is configuring a security group, the user can specify either a CIDR-based IP range or a security group as the source. Since this is EC2-Classic, by default all outbound traffic is enabled. An open port for IP 0.0.0.0/0 is not recommended considering the security risk. The best option is to configure the security group of the DB instance so that it allows inbound traffic from the Webserver security group. When the user specifies a security group as the source or destination for a rule, the rule affects all the instances associated with that security group.
Question : Being a HadoopExam.com developer you have configured two security groups which allow traffic as given below: SecGrp1: inbound on port 80 for 0.0.0.0/0 and on port 22 for 0.0.0.0/0; SecGrp2: inbound on port 22 for 10.10.10.1/32. If both the security groups are associated with the same instance, which of the below mentioned statements is true? 1. It allows inbound traffic on both port 22 and 80 for everyone 2. It allows inbound traffic on port 22 for IP 10.10.10.1 and for everyone else on port 80 3. Access Mostly Uused Products by 50000+ Subscribers 4. It is not possible to have more than one security group assigned to a single instance
Ans : 1 Exp : A user can attach more than one security group to a single EC2 instance. In this case, the rules from each security group are effectively aggregated to create one set of rules. AWS uses this set of rules to determine whether to allow access or not. Thus, here the rule for port 22 with IP 10.10.10.1/32 will merge with IP 0.0.0.0/0 and open ports 22 and 80 for all.
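The aggregation rule above can be modeled in a few lines: access is permitted if any rule in any attached group matches. This is a toy simulation of the behavior, not real AWS code; the source IPs are made up.

```python
from ipaddress import ip_address, ip_network

# Toy model of how EC2 aggregates inbound rules from multiple
# security groups on one instance: traffic is allowed if ANY rule
# in ANY attached group matches the port and the source IP.
sec_grp1 = [(80, "0.0.0.0/0"), (22, "0.0.0.0/0")]
sec_grp2 = [(22, "10.10.10.1/32")]
groups = [sec_grp1, sec_grp2]

def allowed(port, source_ip, groups):
    return any(
        port == rule_port and ip_address(source_ip) in ip_network(cidr)
        for group in groups
        for rule_port, cidr in group
    )

print(allowed(22, "54.0.0.1", groups))   # True: the 0.0.0.0/0 rule matches
print(allowed(80, "54.0.0.1", groups))   # True
print(allowed(443, "54.0.0.1", groups))  # False: no rule for port 443
```

Because the 0.0.0.0/0 rule for port 22 in SecGrp1 already matches everyone, the narrower 10.10.10.1/32 rule in SecGrp2 adds nothing, which is why option 1 is correct.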
Question : Being a HadoopExam.com developer you have launched a dedicated EBS backed instance with EC2. Where will the EBS volume of the instance be created?
Ans : 1 Exp : The dedicated instances are Amazon EC2 instances that run in a Virtual Private Cloud (VPC) on hardware that is dedicated to a single customer. When a user launches an Amazon EBS-backed dedicated instance, the EBS volume does not run on single-tenant hardware.
Question : Which of the below mentioned instance types provides a better dedicated IO with EBS?
Ans : 1 Exp : An Amazon EBS-optimized instance uses an optimized configuration stack and provides additional, dedicated capacity for the Amazon EBS I/O. This optimization provides the best performance for the user's Amazon EBS volumes by minimizing contention between the Amazon EBS I/O and other traffic from the user's instance.
Question : Being a HadoopExam.com developer you have launched EC2 instances inside a placement group, then stopped three instances and started them again after 60 minutes. Select the correct statement for this scenario.
1. The new instance may be a part of the placement group if there is available capacity at EC2 or else they will run independently 2. The EBS backed instances can never be launched within the placement group 3. Access Mostly Uused Products by 50000+ Subscribers 4. All running instances will still be a part of the same placement group
Ans : 4 Exp : A placement group is a logical grouping of EC2 instances within a single Availability Zone. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. If the user stops an instance in a placement group and then starts it again, it still runs in the placement group. However, the start fails if there is not enough capacity for the instance in EC2; in that case the instances will not launch.
Question : Being a HadoopExam.com AWS developer you have launched a Windows instance without attaching a key-pair. How can the user connect to the instance? 1. Login with the Administrator and generate a key-pair using RDP 2. The user can login if the Windows Admin password is known 3. Access Mostly Uused Products by 50000+ Subscribers 4. Attach the security key once the instance is launched
Ans : 2 Exp : If a user has launched a Windows EC2 instance without attaching a key-pair, he can still connect to that instance if the login ID and password are known to him. For Windows, the key-pair is required only to retrieve the auto-generated Administrator password.
Question : A user has lost a key-pair file for the EBS backed Linux instance. If the user wants to connect to it, how can he connect?
1. Remove the root volume, attach to another instance and generate / modify the key files 2. If the user knows the Linux user name / password the key-pair is not required again 3. Access Mostly Uused Products by 50000+ Subscribers 4. Download the key-pair again from the AWS console and connect to the instance
Ans : 1 Exp : If a user has lost the key-pair file for an EBS backed Linux EC2 instance, he cannot connect to the instance with that key again. The only option is for the user to stop the instance, detach its root volume and attach it to another instance as a data volume, modify the authorized_keys file, move the volume back to the original instance, and restart the instance. This procedure is not supported for instance store-backed instances or instances whose root volume has an AWS Marketplace product code.
Question : A user has lost a key-pair file for the Instance store backed Linux instance. If the user wants to connect to it, how can he connect? 1. It is not possible to connect to the instance without a key-pair 2. Download the key-pair again from the AWS console and connect to the instance 3. Access Mostly Uused Products by 50000+ Subscribers 4. If the user knows the Linux user name / password the key-pair is not required again
Ans : 1 Exp : If a user has lost the key-pair file for a Linux EC2 instance, the usual workaround is to stop the instance, detach its root volume and attach it to another instance as a data volume, modify the authorized_keys file, move the volume back to the original instance, and restart the instance. However, this procedure is not supported for instance store-backed instances or instances whose root volume has an AWS Marketplace product code, so for an instance store-backed instance it is not possible to connect without the key-pair.
Ans : 2 Exp : The user can use commands such as create-key-pair (AWS CLI), ec2-create-keypair (Amazon EC2 CLI), or New-EC2KeyPair (AWS Tools for Windows PowerShell) to create a new key-pair. The user can use ec2-import-keypair to import his own key-pair.
Question : A user has mounted an EBS volume to a Windows instance. Which of the below mentioned options allows the user to unmount the volume from that instance?
Ans : 3 Exp : To detach an Amazon EBS volume using the console the user must unmount it. To unmount the device in Windows, the user should open Disk Management, right-click the volume to unmount, and select Change Drive Letter and Path. Then, select the mount point to remove and click on Remove.
Question : Being a HadoopExam.com AWS developer you have created an EBS volume from an existing snapshot. The data is being loaded lazily from S3 to the volume. If the user tries to access the data of the volume which is not yet loaded, what will happen? 1. The user cannot access the data until all the data is loaded 2. The data is not loaded lazily. Volume is available only when the whole data is loaded 3. Access Mostly Uused Products by 50000+ Subscribers 4. The volume downloads the data from S3 so that the user can access it
Ans : 4 Exp : When the user creates a new Amazon EBS volume, he may create it based on an existing snapshot. The new volume will be created as an exact replica of the original volume that was used to create the snapshot. New volumes created from existing Amazon S3 snapshots load lazily in the background. Thus, the user can start using them immediately. If the instance accesses a piece of data that has not yet been loaded, the volume immediately downloads the requested data from Amazon S3, and then continues loading the rest of the volume's data in the background.
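The lazy-loading behavior described above can be sketched as a toy cache: a block is fetched from the backing store only on first access, then served locally. This is an illustration of the concept only; the block contents and store are made up, not S3 API calls.

```python
# Toy model of lazy loading: blocks come from a backing snapshot
# store (standing in for S3) only on first access, then are cached.
snapshot_store = {0: b"alpha", 1: b"beta", 2: b"gamma"}

class LazyVolume:
    def __init__(self, store):
        self._store = store
        self._local = {}   # blocks already downloaded to the volume
        self.fetches = 0   # count of on-demand downloads

    def read(self, block):
        if block not in self._local:
            self._local[block] = self._store[block]  # download on demand
            self.fetches += 1
        return self._local[block]

vol = LazyVolume(snapshot_store)
print(vol.read(1))   # first access triggers a fetch from the store
print(vol.read(1))   # second access is served from the local copy
print(vol.fetches)   # 1
```

As in the real service, the caller can start reading immediately; only not-yet-loaded data incurs the extra round trip to the backing store.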
Question : Being a HadoopExam.com AWS developer you have attached an EBS volume to an EBS-optimized instance. If the volume performs I/O in 16 KB chunks, what is the maximum IOPS that a user can provision to get the optimum output? 1. 4000 2. 5000 3. Access Mostly Uused Products by 50000+ Subscribers 4. 2000
Ans : 1 Exp : IOPS are input/output operations per second. Amazon EBS measures these I/O operations in 16 KB chunks. When the user provisions a 4,000 IOPS volume and attaches it to an EBS-optimized instance that can provide the necessary bandwidth, he can transfer 4,000 16 KB chunks of data per second (for a bandwidth of approximately 64 MB/s, or 512 Mbps).
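The bandwidth figure quoted above is a simple multiplication; a quick check:

```python
# Back-of-the-envelope check: 4,000 IOPS at a 16 KB chunk size.
iops = 4000
chunk_kb = 16

mb_per_s = iops * chunk_kb / 1024   # throughput in MB/s
mbit_per_s = mb_per_s * 8           # same figure in megabits/s

print(mb_per_s)    # 62.5, i.e. roughly the 64 MB/s quoted above
print(mbit_per_s)  # 500.0, i.e. roughly 512 Mbps
```

The exact values are 62.5 MB/s and 500 Mbps; the explanation rounds them to 64 MB/s and 512 Mbps.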
Question : Being a HadoopExam.com AWS developer you have defined an AutoScaling termination policy to first delete the instance nearest to the billing hour. AutoScaling has launched 3 instances in the US-East-1A Availability Zone and 2 instances in US-East-1B. One of the instances in US-East-1B is running nearest to the billing hour. Which instance will AutoScaling terminate first while executing the termination action?
Ans : 4 Exp : Even though the user has configured the termination policy, before AutoScaling selects an instance to terminate, it first identifies the Availability Zone that has more instances than the other Availability Zones used by the group. Within the selected Availability Zone, it identifies the instance that matches the specified termination policy.
Question : Being a HadoopExam.com AWS developer you have defined an AutoScaling termination policy to first delete the oldest instance. AutoScaling has launched 2 instances in the US-East-1A Availability Zone and 2 instances in US-East-1B. One of the instances in US-East-1B is running nearest to the billing hour, while the instance in US-East-1A is the oldest one. Which instance will AutoScaling terminate first while executing the termination action? 1. Deletes the instance from US-East-1B which is nearest to the running hour 2. Deletes the oldest instance from US-East-1B 3. Access Mostly Uused Products by 50000+ Subscribers 4. Deletes the oldest instance from US-East-1A
Ans : 3 Exp : Even though the user has configured the termination policy, before AutoScaling selects an instance to terminate, it first identifies the Availability Zone that has more instances than the other Availability Zones used by the group. If both the zones have the same instance count it will select the zone randomly. Within the selected Availability Zone, it identifies the instance that matches the specified termination policy. In this case it will identify the AZ randomly and then first delete the oldest instance from that zone which matches the termination policy.
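The two-step selection described above (first pick the Availability Zone with the most instances, then apply the termination policy within it) can be sketched as a toy simulation. The instance IDs, AZ names, and launch times below are invented for illustration.

```python
from datetime import datetime

# Toy model of Auto Scaling termination: 1) pick the AZ with the
# most instances, 2) apply the policy (here: oldest) within that AZ.
# All IDs, zones, and launch times are made-up example data.
instances = [
    ("i-a1", "us-east-1a", datetime(2024, 1, 1)),   # oldest overall
    ("i-a2", "us-east-1a", datetime(2024, 1, 3)),
    ("i-b1", "us-east-1b", datetime(2024, 1, 2)),
    ("i-b2", "us-east-1b", datetime(2024, 1, 4)),
    ("i-b3", "us-east-1b", datetime(2024, 1, 5)),
]

def pick_victim(insts):
    # Step 1: Availability Zone with the most instances
    counts = {}
    for _, az, _ in insts:
        counts[az] = counts.get(az, 0) + 1
    target_az = max(counts, key=counts.get)
    # Step 2: oldest instance within that zone
    in_az = [i for i in insts if i[1] == target_az]
    return min(in_az, key=lambda i: i[2])[0]

print(pick_victim(instances))  # i-b1: us-east-1b has more instances
```

Note that the oldest instance overall (i-a1) survives, because the zone-balancing step runs before the termination policy is consulted.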
Question : Which of the below mentioned commands allows the user to modify the desired capacity for the AutoScaling group? 1. as-set-desired-capacity 2. as-change-desired-capacity 3. Access Mostly Uused Products by 50000+ Subscribers 4. as-update-desired-capacity
Ans : 1 Exp : To execute manual scaling in AutoScaling, the user should modify the desired capacity. AutoScaling will adjust instances as per the requirements. The command as-set-desired-capacity is used to change the size of the AutoScaling group.
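The effect of setting a desired capacity can be sketched as a small simulation: the requested value is bounded by the group's min/max sizes, and the group then launches or terminates instances to match. This is a conceptual model, not the actual as-set-desired-capacity implementation; the group sizes are made up.

```python
# Toy model of manual scaling via desired capacity: the request is
# clamped to the group's min/max, then instances are launched or
# terminated to close the gap with the running count.
def set_desired_capacity(desired, min_size, max_size, running):
    desired = max(min_size, min(max_size, desired))
    delta = desired - running
    action = "launch" if delta > 0 else "terminate" if delta < 0 else "none"
    return desired, action, abs(delta)

print(set_desired_capacity(7, 2, 5, 3))  # (5, 'launch', 2): capped at max
print(set_desired_capacity(2, 2, 5, 4))  # (2, 'terminate', 2)
```

The clamping step is why setting the desired capacity can never push the group outside its configured min/max bounds.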
Question : After launching an instance that you intend to serve as a NAT (Network Address Translation) device in a public subnet you modify your route tables to have the NAT device be the target of internet bound traffic of your private subnet. When you try to make an outbound connection to the Internet from an instance in the private subnet, you are not successful. Which of the following steps could resolve the issue? 1. Attaching a second Elastic Network interface (ENI) to the NAT instance, and placing it in the private subnet 2. Attaching a second Elastic Network Interface (ENI) to the instance in the private subnet, and placing it in the public subnet 3. Access Mostly Uused Products by 50000+ Subscribers 4. Attaching an Elastic IP address to the instance in the private subnet
Ans : 3 Exp :
Question : Which of the following programming languages have an officially supported AWS SDK? Choose 2 answers A. Perl B. PHP C. Pascal D. Java E. SQL
Ans : 2 Exp : AWS currently offers SDKs for seven different programming languages: Java, C#, Ruby, Python, JavaScript, PHP, and Objective-C (iOS), and we closely follow the language trends among our customers and the general software community. Since its launch, the Go programming language has had a remarkable growth trajectory, and we have been hearing customer requests for an official AWS SDK with increasing frequency. We listened and decided to deliver a new AWS SDK to our Go-using customers.
Question : How can software determine the public and private IP addresses of the Amazon EC2 instance that it is running on? 1. Query the appropriate Amazon CloudWatch metric. 2. Use ipconfig or ifconfig command. 3. Access Mostly Uused Products by 50000+ Subscribers 4. Query the local instance metadata.
Ans : 4 Exp : local-ipv4: the private IP address of the instance. In cases where multiple network interfaces are present, this refers to the eth0 device (the device for which the device number is 0). public-ipv4: the public IP address. If an Elastic IP address is associated with the instance, the value returned is the Elastic IP address.
Question : A startup's photo-sharing site is deployed in a VPC. An ELB distributes web traffic across two subnets. ELB session stickiness is configured to use the AWS-generated session cookie, with a session TTL of 5 minutes. The webserver Auto Scaling Group is configured as: min-size=4, max-size=4. The startup is preparing for a public launch by running load-testing software installed on a single EC2 instance running in us-west-2a. After 60 minutes of load-testing, the webserver logs show: Which recommendations can help ensure load-testing HTTP requests are evenly distributed across the four webservers? Choose 2 answers A. Launch and run the load-tester EC2 instance from us-east-1 instead. B. Re-configure the load-testing software to re-resolve DNS for each web request. C. Use a 3rd-party load-testing service which offers globally-distributed test clients. D. Configure ELB and Auto Scaling to distribute across us-west-2a and us-west-2c. E. Configure ELB session stickiness to use the app-specific session cookie.
Question : Company A has an S3 bucket containing premier content that they intend to make available to only paid subscribers of their website. The S3 bucket currently has default permissions of all objects being private to prevent inadvertent exposure of the premier content to non-paying website visitors. How can Company A provide only paid subscribers the ability to download a premier content file in the S3 bucket? 1. Apply a bucket policy that grants anonymous users permission to download the content from the S3 bucket 2. Generate a pre-signed object URL for the premier content file when a paid subscriber requests a download 3. Access Mostly Uused Products by 50000+ Subscribers 4. Enable server side encryption on the S3 bucket for data protection against the non-paying website visitors
Ans : 2 Exp : All objects are private by default; only the object owner has permission to access them. However, the object owner can optionally share objects with others by creating a pre-signed URL, using his own security credentials, to grant time-limited permission to download the objects. When you create a pre-signed URL for your object, you must provide your security credentials, specify a bucket name and an object key, specify the HTTP method (GET to download the object), and set an expiration date and time. The pre-signed URL is valid only for the specified duration; anyone who receives it can access the object within that window. For example, if you have a video in your bucket and both the bucket and the object are private, you can share the video with others by generating a pre-signed URL. Note that anyone with valid security credentials can create a pre-signed URL, but to successfully access an object, the pre-signed URL must be created by someone who has permission to perform the operation that the URL is based upon. A signed URL is therefore also known as a "time-limited signed URL": after the expiry time, the URL no longer remains active, and a user who attempts to access it will only see a "Request has expired" message. A signed URL can be generated for all versioned objects.
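The core idea of a time-limited pre-signed URL can be sketched with an HMAC over the object key and expiry time: anyone holding the URL can use it until it expires, and tampering with either field breaks the signature. This is a simplified illustration of the principle, not the real AWS Signature Version 4 algorithm; the secret, path, and object key are placeholders.

```python
import hashlib
import hmac
import time

# Minimal sketch of a time-limited signed URL: the server signs
# "key:expiry" with a secret it never shares. NOT the AWS SigV4
# algorithm; secret and paths below are made-up placeholders.
SECRET = b"server-side-secret"

def presign(object_key, expires_at):
    msg = f"{object_key}:{expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"/premier/{object_key}?Expires={expires_at}&Signature={sig}"

def verify(object_key, expires_at, sig, now):
    msg = f"{object_key}:{expires_at}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, sig) and now < expires_at

exp = int(time.time()) + 300               # valid for 5 minutes
url = presign("video.mp4", exp)
sig = url.split("Signature=")[1]
print(verify("video.mp4", exp, sig, int(time.time())))  # True
print(verify("video.mp4", exp, sig, exp + 1))           # False: expired
```

The website would run presign only after confirming the visitor is a paid subscriber, which is exactly why option 2 solves the problem without opening the bucket to anonymous users.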
Question : What AWS products and features can be deployed by Elastic Beanstalk? Choose answers A. Auto scaling groups B. Route 53 hosted zones C. Elastic Load Balancers D. RDS Instances E. Elastic IP addresses F. SQS Queues
Ans : 3 Exp : With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. Elastic Beanstalk uses highly reliable and scalable services that are available in the AWS Free Usage Tier, such as: Amazon Elastic Compute Cloud, Amazon Simple Storage Service, Amazon Simple Notification Service, Amazon CloudWatch, Elastic Load Balancing, Auto Scaling, Amazon RDS, Amazon DynamoDB, Amazon CloudFront, and Amazon ElastiCache.
Question : If an application is storing hourly log files from thousands of instances from a high traffic web site, which naming scheme would give optimal performance on S3? 1. Sequential 2. instanceID_log-HH-DD-MM-YYYY 3. Access Mostly Uused Products by 50000+ Subscribers 4. HH-DD-MM-YYYY-log_instanceID 5. YYYY-MM-DD-HH-log_instanceID
Ans : 4 Exp :
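The intuition behind the naming question is that S3 historically partitioned keys by their leading characters, so a scheme that puts a high-cardinality value (like the instance ID) first spreads keys across many partitions, while a scheme that puts the shared timestamp first funnels every instance's log into one partition. A toy comparison, with invented instance IDs:

```python
from datetime import datetime

# Compare how two naming schemes spread keys across leading-character
# prefixes (the portion S3 historically partitioned on). The instance
# IDs here are fabricated for illustration.
instances = [f"i-{n:04x}" for n in range(100)]
hour = datetime(2024, 5, 1, 13)

# Scheme A: timestamp first (all keys share one hot prefix)
scheme_time_first = {f"{hour:%H-%d-%m-%Y}-log_{i}"[:6] for i in instances}
# Scheme B: instance ID first (keys fan out across many prefixes)
scheme_id_first = {f"{i}_log-{hour:%H-%d-%m-%Y}"[:6] for i in instances}

print(len(scheme_time_first))  # 1: every key starts with the same hour
print(len(scheme_id_first))    # 100: one distinct prefix per instance
```

Current S3 guidance has relaxed these prefix rules, but the question reflects the older performance model where key-name entropy at the front of the key mattered.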
Question : An application stores payroll information nightly in DynamoDB for a large number of employees across hundreds of offices. Item attributes consist of individual name, office identifier, and cumulative daily hours. Managers run reports for ranges of names working in their office. One query is. "Return all Items in this office for names starting with A through E". Which table configuration will result in the lowest impact on provisioned throughput for this query? 1. Configure the table to have a hash index on the name attribute, and a range index on the office identifier 2. Configure the table to have a range index on the name attribute, and a hash index on the office identifier 3. Access Mostly Uused Products by 50000+ Subscribers 4. Configure a hash index on the office Identifier attribute and no range index
Ans : 2 Exp : In terms of the data model, the hash key allows you to uniquely identify a record in your table, and the range key can optionally be used to group and sort several records that are usually retrieved together. Example: if you are defining an aggregate to store Order Items, the Order Id could be your hash key and the OrderItemId the range key. Whenever you would like to retrieve the Order Items from a particular Order, you just query by the hash key (Order Id), and you will get all your order items.
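Applied to the payroll question, the office identifier is the hash key and the name is the range key, so "names A through E in this office" becomes a single-partition query with a range condition. A toy model of that access pattern, with invented table data:

```python
# Toy model of a DynamoDB-style table with a hash (partition) key on
# the office and a range (sort) key on the name: the manager's report
# reads one partition and applies a range condition on the name.
# Offices, names, and hours below are fabricated example data.
table = [
    {"office": "NYC", "name": "Alice", "hours": 8},
    {"office": "NYC", "name": "Bob", "hours": 7},
    {"office": "NYC", "name": "Frank", "hours": 6},
    {"office": "SFO", "name": "Carol", "hours": 8},
]

def query(office, first_letter_from, first_letter_to):
    # hash key pins the partition; range condition narrows within it
    return [
        item for item in table
        if item["office"] == office
        and first_letter_from <= item["name"][0] <= first_letter_to
    ]

print([i["name"] for i in query("NYC", "A", "E")])  # ['Alice', 'Bob']
```

With the keys the other way around (hash on name, range on office), the same report would have to touch a partition per employee, which is why option 2 has the lowest throughput impact.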
Question : Being a HadoopExam.com AWS developer you are configuring AutoScaling with CLI. Which of the below mentioned adjustment types is not supported by the AutoScaling policy as a part of the command? (e.g. adjustment=50 Type=???)
Ans : 4 Exp : A user can configure the AutoScaling group to automatically scale up and then scale down based on the various specified CloudWatch monitoring conditions. The user needs to provide the adjustment value and the adjustment type. The user can express the change to the current size with the parameter "type". The value for the adjustment type can be either "ExactCapacity", "ChangeInCapacity" or "PercentChangeInCapacity".
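The three supported adjustment types named above can be sketched as a small dispatcher; anything else is rejected, mirroring why the fourth option in the question is unsupported. This is a conceptual model, not the Auto Scaling implementation.

```python
import math

# Toy model of the three supported Auto Scaling adjustment types:
# ExactCapacity sets the size outright, ChangeInCapacity adds or
# removes a fixed count, PercentChangeInCapacity scales relatively.
def apply_adjustment(current, adjustment, adj_type):
    if adj_type == "ExactCapacity":
        return adjustment
    if adj_type == "ChangeInCapacity":
        return current + adjustment
    if adj_type == "PercentChangeInCapacity":
        return current + math.floor(current * adjustment / 100)
    raise ValueError(f"unsupported adjustment type: {adj_type}")

print(apply_adjustment(4, 10, "ExactCapacity"))            # 10
print(apply_adjustment(4, 2, "ChangeInCapacity"))          # 6
print(apply_adjustment(4, 50, "PercentChangeInCapacity"))  # 6
```

(Real Auto Scaling applies its own rounding rules for percentage changes; floor is used here just to keep the sketch deterministic.)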
Question : Being a HadoopExam.com AWS developer you have created multiple AutoScaling groups. The user is trying to create a new AS group but it fails. How can the user know that he has reached the AS group limit specified by AutoScaling in that region? 1. Run the command: as-max-account-limits 2. Run the command: as-describe-group-limits 3. Access Mostly Uused Products by 50000+ Subscribers 4. Run the command: as-list-account-limits
Ans : 3 Exp : A user can see the number of AutoScaling resources currently allowed for the AWS account either by using the as-describe-account-limits command or by calling the DescribeAccountLimits action.
Question : A user has stored an object in RRS. The object is lost due to an internal AWS failure. What will AWS return when someone queries the object? 1. The object cannot be lost as RRS is highly durable 2. 405 Method Not Allowed error 3. Access Mostly Uused Products by 50000+ Subscribers 4. AWS will serve the object from backup
Ans : 2 Exp : If an object in reduced redundancy storage has been lost, Amazon S3 will return a 405 error on requests made to that object.
Question : Being a HadoopExam.com AWS developer you have created an AWS AMI. The user wants the AMI to be available only to his friend and not anyone else. How can the user manage this? 1. Share the AMI with a friend's AWS account ID. 2. It is not possible to share the AMI with the selected user. 3. Access Mostly Uused Products by 50000+ Subscribers 4. Share the AMI with a friend's AWS login ID.
Ans : 1 Exp : In Amazon Web Services, If a user has created an AMI and wants to share with his friends and colleagues he can share the AMI with their AWS account ID. Once the AMI is shared the other user can access it from the community AMIs under private AMIs options.
Question : Being a HadoopExam.com AWS developer you are sharing the AWS AMI with selected users. Will the new user be able to create a volume from the shared AMI? 1. Yes, provided the owner has given the launch instance permission. 2. Yes, provided the owner has given the create volume permission. 3. Access Mostly Uused Products by 50000+ Subscribers 4. Yes, always
Ans : 2 Exp : In Amazon Web Services, when a user is sharing an AMI with another user, the owner needs to give explicit permission to other users to create a volume from the snapshot. Otherwise the other user cannot create a volume from the snapshot.
Question : An AMI owner has shared his AMI with another user. The other user wants to search the AMI from the AWS console. Which search criteria should the user apply in the console? 1. Provide only the AMI ID. 2. Provide the AMI ID and search for "Shared with me". 3. Access Mostly Uused Products by 50000+ Subscribers 4. Provide the AMI ID and search for private images.
Ans : 4 Exp : In Amazon Web Services, when an owner has shared an AMI with a selected user, the other user can find the AMI with the filter criteria as "AMI ID" + "Private AMIs".
Question : Being a HadoopExam.com AWS developer you are launching a new EBS backed instance. How can the user set it up so that the root EBS volume mounted to the instance does not get deleted when the instance is terminated?
1. Set the instance attribute DeleteOnTermination to "true". 2. Set the instance attribute DeleteOnTermination to "false". 3. Access Mostly Uused Products by 50000+ Subscribers 4. Enable the termination protection on the instance with modify-instance-termination.
Ans : 2 Exp : In Amazon Web Services, when a user launches an EC2 instance, the user can set it up so that the root EBS volume does not get deleted when the instance is terminated. This can be achieved by setting the DeleteOnTermination attribute to false.
Question : Being a QuickTechie.com AWS developer you have launched an EBS backed instance. Can the user configure the instance so that future instances will have ephemeral storage when creating the AMI? 1. No 2. Yes, provided the AMI is instance store backed 3. Access Mostly Uused Products by 50000+ Subscribers 4. Yes, always
Ans : 3 Exp : In Amazon Web Services, when a user is creating an AMI from an EBS backed instance, the user can configure future instances launched from the new AMI to have ephemeral storage attached. This ephemeral storage is available for EBS backed AMIs provided the instance size is not micro. All other instance types will have the ephemeral device attached on instance launch if it is configured during AMI creation. The new ephemeral storage will always be empty.
Question : Being a QuickTechie.com AWS developer you have launched an EC2 instance under the free usage tier. The user wants to create large instances by creating an AMI from the same instance. Can the large instances automatically have ephemeral storage attached to them? 1. No, an EBS backed instance can never have ephemeral storage 2. Yes, provided the user configured ephemeral storage during the AMI creation 3. Access Mostly Uused Products by 50000+ Subscribers 4. No, an AMI created from a micro instance can never have ephemeral storage
Ans : 2 Exp : When a user is creating an AMI from an EBS backed instance, the user can configure future instances launched from the new AMI to have ephemeral storage attached. This ephemeral storage is available for EBS backed AMIs provided the instance size is not micro. All other instance types will have the ephemeral device attached on instance launch if it is configured during AMI creation.
Question : Being a QuickTechie.com AWS developer you have created a bucket and are trying to access an object using its public URL. Which of the below mentioned statements is false for accessing the object using the REST API endpoint? 1. It returns the response in an XML format 2. It supports all the object and bucket functions with REST 3. Access Mostly Uused Products by 50000+ Subscribers 4. It supports the redirect request
Ans : 4 Exp : There is a difference between the S3 REST API endpoint and the S3 website hosting endpoint: the REST API endpoint does not support redirect requests.
Question : You have been added as an IAM user and trying to perform an action on an object belonging to some other root account's bucket. Which of the below mentioned options will AWS S3 not verify? 1. Permission provided by the parent of the IAM user on the bucket 2. Permission provided by the bucket owner to the IAM user 3. Access Mostly Uused Products by 50000+ Subscribers 4. Permission provided by the parent of the IAM user
1. It will result in an error saying invalid policy statement 2. Allows the user test of the AWS account ID 3377 to perform GetBucketLocation, ListBucket and GetObject on the bucket hadoopexam 3. Access Mostly Uused Products by 50000+ Subscribers 4. It will allow all the IAM users of the account ID 3377 to perform GetBucketLocation, ListBucket and GetObject on bucket hadoopexam