Question : A Windows XP client has an ephemeral port range of 1025-5000, and a request is initiated from this client over the Internet to a web server in your VPC. Which of the following statements is correct for serving the response to that client?
1. Your network ACL must have an outbound rule to enable traffic destined for ports 1025-5000
2. Your network ACL must have an inbound rule to enable traffic destined for ports 1025-5000
3. Your network ACL must have an inbound and outbound rule to enable traffic destined for ports 1025-5000
4. All of the above
Correct Answer : 1
Ephemeral Ports

The network ACL uses an ephemeral port range of 49152-65535. However, you might want to use a different range for your network ACLs. This section explains why.
The client that initiates the request chooses the ephemeral port range. The range varies depending on the client's operating system. Many Linux kernels (including the Amazon Linux kernel) use ports 32768-61000. Requests originating from Elastic Load Balancing use ports 1024-65535. Windows operating systems through Windows Server 2003 use ports 1025-5000. Windows Server 2008 uses ports 49152-65535. Therefore, if a request comes in to a web server in your VPC from a Windows XP client on the Internet, your network ACL must have an outbound rule to enable traffic destined for ports 1025-5000.
If an EC2 instance in your VPC is the client initiating a request, your network ACL must have an inbound rule to enable traffic destined for the ephemeral ports specific to the type of instance (Amazon Linux, Windows Server 2008, and so on).
In practice, to cover the different types of clients that might initiate traffic to public-facing instances in your VPC, you need to open ephemeral ports 1024-65535. However, you can also add rules to the ACL to deny traffic on any malicious ports within that range. Make sure to place the DENY rules earlier in the table than the rule that opens the wide range of ephemeral ports.
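The rules above can be modeled with a short sketch. This is illustrative only (the OS-to-range mapping comes from the text above; the function names are not an AWS API): it shows which destination port range an outbound NACL rule must cover for each client type, and confirms that the catch-all 1024-65535 rule covers them all.

```python
# Ephemeral port ranges per client type, as listed in the text above.
EPHEMERAL_RANGES = {
    "linux": (32768, 61000),                # many Linux kernels, incl. Amazon Linux
    "elb": (1024, 65535),                   # Elastic Load Balancing
    "windows_xp": (1025, 5000),             # Windows through Server 2003
    "windows_server_2008": (49152, 65535),
}

def outbound_range_for(client_os: str) -> tuple:
    """Return the (low, high) destination port range the NACL's
    outbound rule must allow so responses reach the client."""
    return EPHEMERAL_RANGES[client_os]

def rule_covers(rule_low: int, rule_high: int, client_os: str) -> bool:
    """True if an allow rule for rule_low-rule_high covers the client's range."""
    low, high = EPHEMERAL_RANGES[client_os]
    return rule_low <= low and high <= rule_high

# A Windows XP client needs an outbound rule for 1025-5000:
print(outbound_range_for("windows_xp"))   # (1025, 5000)
# The wide 1024-65535 rule covers every client type listed:
print(all(rule_covers(1024, 65535, os) for os in EPHEMERAL_RANGES))  # True
```

Note that a narrow rule such as 1025-5000 would not cover a Windows Server 2008 client (49152-65535), which is why the wide range is opened and specific DENY rules are placed before it.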
Question : Which of the following statements are true about AWS DynamoDB Global Secondary Indexes?
A. They can be added on to existing tables
B. They have their own provisioned throughput
C. They can have different partition and sort keys from the parent table
D. Must be created when we create the table.
1. A,B,C
2. B,C,D
3. A,C,D
4. A,B,D
Correct Answer : 1

Explanation: Secondary Indexes: Local and Global
There are two types of secondary indexes, local and global, and they have slightly different characteristics.
Local Secondary Indexes
These are probably the easiest to understand because they share their table's partition key, but give us the option to have more sort keys. In fact, local indexes give us the option to have up to 5 more sort keys because you can create up to 5 local secondary indexes. This is in addition to the sort key you already have on the table, for a total of 6 sort keys.
Be aware that local secondary indexes share provisioned throughput (read/write capacity) with their parent table. We need to understand this when allocating read and write capacity on tables.
Local indexes must be created when we create the table. We cannot add them after the table is created, nor can we delete them! Plan this out carefully.
Global Secondary Indexes
Global indexes have a few major differences compared to local indexes:
- They can be added on to existing tables
- They have their own provisioned throughput
- They can have different partition and sort keys from the parent table

The first difference can give us a lot of flexibility. Sometimes our needs change as our data or traffic grows, and having the ability to add indexes as we need them is a big bonus.
The second difference can completely change how you calculate the necessary read and write capacity units for a table and index, and can also make a difference in cost.
The third difference again gives us greater flexibility. Whereas with local secondary indexes we had to have a composite key (partition key + sort key), and we had to use the same partition key, global indexes completely change that. We can have a simple primary key (just a partition key) or a composite key, and they can be completely different from that of the table's keys. That's why they're called global - because queries on the index can span all of the data in a table, across all partitions.
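The three differences show up directly in the shape of the UpdateTable request used to add a GSI. Below is a sketch of such a request body (table and index names are hypothetical): the new index is created on an existing table, carries its own ProvisionedThroughput, and uses keys that differ from the table's own.

```python
# Request payload for DynamoDB's UpdateTable API to add a GSI to an
# existing table. "Orders" and the index name are illustrative.
gsi_update = {
    "TableName": "Orders",
    "AttributeDefinitions": [
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
    ],
    "GlobalSecondaryIndexUpdates": [
        {
            "Create": {
                "IndexName": "CustomerId-OrderDate-index",
                # The index keys can differ completely from the table's keys
                "KeySchema": [
                    {"AttributeName": "CustomerId", "KeyType": "HASH"},
                    {"AttributeName": "OrderDate", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
                # Throughput is provisioned separately from the table
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 5,
                    "WriteCapacityUnits": 5,
                },
            }
        }
    ],
}

# With boto3 this would be sent as:
#   boto3.client("dynamodb").update_table(**gsi_update)
print(gsi_update["GlobalSecondaryIndexUpdates"][0]["Create"]["IndexName"])
```

A local secondary index, by contrast, has no such update path: it can only appear in the original CreateTable request, must reuse the table's partition key, and has no ProvisionedThroughput block of its own.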
Question : If you are using a DB instance that uses Provisioned IOPS storage, will you be charged for I/Os as well?
1. Only above 1 Million/Week.
2. Only above 1 Billion/month.
3. No, there is no charge for I/Os
4. Yes, you will be charged for each I/O
Correct Answer : 3
Provisioned IOPS Storage Costs Because Provisioned IOPS storage reserves resources for your use, you are charged for the resources whether or not you use them in a given month. When you use Provisioned IOPS storage, you are not charged the monthly Amazon RDS I/O charge. If you prefer to pay only for I/O that you consume, a DB instance that uses standard storage may be a better choice.
1. 5
2. 200
3. Access Mostly Uused Products by 50000+ Subscribers
4. No limit

Ans : 1
Exp : You can create as many Internet gateways as your VPC-per-region limit allows. Only one Internet gateway can be attached to a VPC at a time.
Question : Which one of the following statements is incorrect:
1. AWS Marketplace is the simplest way for developers to get paid for Amazon AMIs or applications they build on top of Amazon S3.
2. AWS Marketplace supports EBS-backed software, where DevPay does not.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Software providers benefit from AWS Marketplace's marketing outreach and ease of discovery.

Ans : 1
Exp : Amazon DevPay is the simplest way for developers to get paid for Amazon EC2 Machine Images (AMIs) or applications they build on top of Amazon S3. Developers use the simple Amazon DevPay web interface to register their application or AMI with Amazon DevPay and configure their desired pricing. They embed the Amazon DevPay purchase pipeline link into their website to allow their customers to purchase their product. Amazon DevPay allows developers to start selling their application without using complex APIs or writing code to build an order pipeline or a billing system.
Amazon DevPay is the only payments application that automatically meters your customers' usage of Amazon Web Services (such as Amazon S3 or Amazon EC2) and allows you to charge your customers for that usage at whatever price you choose. Amazon DevPay provides you the flexibility to charge for your application based on any combination of a one-time fixed fee, a recurring monthly fee, or fees based on the monthly usage of underlying AWS services. Amazon DevPay also provides account management functions that you'd otherwise have to build and manage yourself. Amazon DevPay keeps track of all your customers' subscriptions and their associated status. When customers request access to your application, Amazon DevPay authenticates these customers and determines whether they have the requisite credentials and payment standing to use your application. Amazon DevPay also provides you with business reports to view revenue, cost, and AWS service usage by customer. Amazon DevPay shares the risk of customer nonpayment with developers. You're responsible for the cost of AWS services that a customer consumes only up to the amount that the customer actually pays. If a customer does not pay, we do not charge you these costs.
Question : A company is running a batch analysis every hour on their main transactional DB, which runs on an RDS MySQL instance, to populate their central Data Warehouse running on Redshift. During execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve the performance issues and automate the process as much as possible?
1. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
2. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
3. Access Mostly Uused Products by 50000+ Subscribers
4. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
1. Increased
2. Reduced
3. Access Mostly Uused Products by 50000+ Subscribers
4. Allowed

Ans : 4
Explanation: Your VPC includes a default security group whose initial rules are to deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances in the group. You can't delete this group; however, you can change the group's rules. The procedure is the same as modifying any other security group.
Question : Which two components does Elastic Load Balancing (ELB) consist of?
1. Load Balancer AND Load Monitoring Service
2. Load Distribution Controller AND Load Monitoring Service
3. Access Mostly Uused Products by 50000+ Subscribers
4. Controller Service AND Load Balancer

Ans : 4
Exp : Elastic Load Balancing (ELB) consists of two components: the load balancers and the controller service. The load balancers monitor the traffic and handle requests that come in through the Internet. The controller service monitors the load balancers, adding and removing load balancers as needed and verifying that the load balancers are functioning properly.
You have to create your load balancer before you can start using it. Elastic Load Balancing automatically generates a unique Domain Name System (DNS) name for each load balancer instance you create. For example, if you create a load balancer named myLB in the us-east-1 region, your load balancer might have a DNS name such as myLB-1234567890.us-east-1.elb.amazonaws.com. Clients can request access to your load balancer by using the ELB-generated DNS name.
If you'd rather use a user-friendly domain name, such as www.example.com, instead of the load balancer DNS name, you can create a custom domain name and then associate the custom domain name with the load balancer DNS name. When a request is placed to your load balancer using the custom domain name that you created, it resolves to the load balancer DNS name.
When a client makes a request to your application using either your load balancer's DNS name or the custom domain name, the DNS server returns one or more IP addresses. The client then makes a connection to your load balancer at the provided IP address. When Elastic Load Balancing scales, it updates the DNS record for the load balancer. The DNS record for the load balancer has the time-to-live (TTL) set to 60 seconds. This setting ensures that IP addresses can be re-mapped quickly to respond to events that cause Elastic Load Balancing to scale up or down.
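The generated name described above follows a recognizable pattern: the load balancer's name, a numeric suffix, the region, then elb.amazonaws.com. A small sketch (the regex is illustrative, not an official specification) shows how a client or script might distinguish an ELB-generated name from a custom domain:

```python
import re

# Pattern for ELB-generated DNS names such as
# myLB-1234567890.us-east-1.elb.amazonaws.com (illustrative, not normative).
ELB_DNS_RE = re.compile(
    r"^[A-Za-z0-9-]+-\d+\.[a-z0-9-]+\.elb\.amazonaws\.com$"
)

def looks_like_elb_dns(name: str) -> bool:
    """True if the hostname matches the ELB-generated naming pattern."""
    return ELB_DNS_RE.match(name) is not None

print(looks_like_elb_dns("myLB-1234567890.us-east-1.elb.amazonaws.com"))  # True
print(looks_like_elb_dns("www.example.com"))                              # False
```

Because the record's TTL is 60 seconds, clients that cache the resolved IP addresses longer than that can end up talking to addresses that ELB has already re-mapped, which is why the custom domain should be a CNAME to this generated name rather than an A record pinned to an IP.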
When you create a load balancer, you must configure it to accept incoming traffic and route requests to your EC2 instances. The controller ensures that load balancers are operating with the correct configuration.
Question : A load balancer is the destination to which all requests intended for your load balanced application should be directed. Each load balancer can distribute requests to multiple EC2 instances. A load balancer is represented by
1. Multiple Availability Zones and EC2 Region
2. A DNS name and a set of ports
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of the above

Ans : 2
Exp : A load balancer is the destination to which all requests intended for your load balanced application should be directed. Each load balancer can distribute requests to multiple EC2 instances. A load balancer is represented by a DNS name and a set of ports. Load balancers can span multiple Availability Zones within an EC2 Region, but they cannot span multiple regions.
To create or work with a load balancer in a specific region, use the corresponding regional service endpoint.
Elastic Load Balancing automatically generates a DNS name for each load balancer instance you create. Typically, the DNS name includes the name of the AWS region in which the load balancer is created. For example, if you create a load balancer named myLB in the us-east-1 region, your load balancer might have a DNS name such as myLB-1234567890.us-east-1.elb.amazonaws.com.
Question : By default, a load balancer routes each request independently to the application instance with the
1. All requests coming from the user during the session will be sent to the same application instance.
2. A load balancer routes each request independently to the application instance with the smallest load
3. Access Mostly Uused Products by 50000+ Subscribers
4. While setting up the ELB you must define a distribution algorithm; there is no default behaviour.

Ans : 2
Exp : Sticky Sessions
By default, a load balancer routes each request independently to the application instance with the smallest load. However, you can use the sticky session feature (also known as session affinity), which enables the load balancer to bind a user's session to a specific application instance. This ensures that all requests coming from the user during the session will be sent to the same application instance.
The key to managing the sticky session is determining how long your load balancer should consistently route the user's request to the same application instance. If your application has its own session cookie, then you can set Elastic Load Balancing to create the session cookie to follow the duration specified by the application's session cookie. If your application does not have its own session cookie, then you can set Elastic Load Balancing to create a session cookie by specifying your own stickiness duration. You can associate stickiness duration for only HTTP/HTTPS load balancer listeners.
An application instance must always receive and send two cookies: a cookie that defines the stickiness duration, and a special Elastic Load Balancing cookie named AWSELB, which has the mapping to the application instance.
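The two behaviors described above (least-load routing by default, cookie-bound routing when stickiness is on) can be sketched in a few lines. This models the behavior only; instance IDs and the cookie value format are illustrative, not ELB's actual implementation.

```python
# Toy session-affinity router: honor an existing AWSELB cookie binding,
# otherwise route to the least-loaded instance and set the cookie.
loads = {"i-aaa": 3, "i-bbb": 1}   # current request counts per instance
cookie_map = {}                     # AWSELB cookie value -> instance id

def route(cookies: dict) -> tuple:
    """Return (instance_id, cookies_to_set) for one request."""
    sticky = cookies.get("AWSELB")
    if sticky in cookie_map:
        instance = cookie_map[sticky]            # honor the existing binding
    else:
        instance = min(loads, key=loads.get)     # default: smallest load
        sticky = f"bind-{instance}"              # illustrative cookie value
        cookie_map[sticky] = instance
    loads[instance] += 1
    return instance, {"AWSELB": sticky}

first, cookies = route({})      # new session: goes to the least-loaded i-bbb
repeat, _ = route(cookies)      # same cookie: same instance, regardless of load
print(first, repeat)            # i-bbb i-bbb
```

Note the trade-off this models: once bound, a session keeps hitting its instance even if that instance later becomes the most loaded one, which is exactly why stickiness is opt-in.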
Question : You have just implemented ELB in front of a fleet of EC2 servers on which your website is hosted. However, some EC2 instances keep failing, about once a month on average. Which of the following are ways by which ELB can find which instances are not serving?
A. ELB can send a page request to find whether the server is responding or not
B. ELB can ping the server to find whether it is alive or not
C. ELB will try to log in to the website using an anonymous user and the default password set by the admin
D. ELB will try to make a connection with the EC2 instance
If you have a device that isn't in the preceding list of tested devices, this section describes the requirements the device must meet for you to use it with Amazon VPC. The following lists the requirements the customer gateway must adhere to, the related RFC (for reference), and comments about each requirement.
To provide context for the following requirements, think of each VPN connection as consisting of two separate tunnels. Each tunnel contains an IKE Security Association, an IPsec Security Association, and a BGP Peering. Note that you are limited to 2 Security Associations (SAs), one inbound and one outbound. Some devices use policy-based VPN and will create as many SAs as ACL entries. Therefore, you may need to consolidate your rules and then filter so you don't permit unwanted traffic.
The VPN tunnel comes up when traffic is generated from your side of the VPN connection. The AWS endpoint is not the initiator; your customer gateway must initiate the tunnels.
Utilize IPsec Dead Peer Detection

The use of Dead Peer Detection enables the VPN devices to rapidly identify when a network condition prevents delivery of packets across the Internet. When this occurs, the gateways delete the Security Associations and attempt to create new associations. During this process, the alternate IPsec tunnel is utilized if possible.
Question : Which is the wrong statement regarding "Security Group" in a VPC?
1. Operates at the instance level (first layer of defense)
2. Supports allow rules only
3. Access Mostly Uused Products by 50000+ Subscribers
4. Is stateless: Return traffic must be explicitly allowed by rules
5. It evaluates all rules before deciding whether to allow traffic
Ans : 4 Exp :The following table summarizes the basic differences between security groups and network ACLs.
Security Group
- Operates at the instance level (first layer of defense)
- Supports allow rules only
- Is stateful: Return traffic is automatically allowed, regardless of any rules
- We evaluate all rules before deciding whether to allow traffic
- Applies to an instance only if someone specifies the security group when launching the instance, or associates the security group with the instance later on

Network ACL
- Operates at the subnet level (second layer of defense)
- Supports allow rules and deny rules
- Is stateless: Return traffic must be explicitly allowed by rules
- We process rules in number order when deciding whether to allow traffic
- Automatically applies to all instances in the subnets it's associated with (backup layer of defense, so you don't have to rely on someone specifying the security group)
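The two evaluation models above can be contrasted in a short sketch. This is a simplified model for illustration (rules are reduced to port ranges on a single protocol, and the NACL's trailing catch-all deny is implicit): a security group permits traffic if any allow rule matches, while a network ACL walks rules in ascending rule-number order and the first match decides.

```python
def sg_allows(port: int, allow_rules: list) -> bool:
    """Security group model: allow rules only; any match permits traffic."""
    return any(lo <= port <= hi for lo, hi in allow_rules)

def nacl_allows(port: int, rules: list) -> bool:
    """Network ACL model: (rule_number, lo, hi, action) tuples evaluated
    in rule-number order; the first matching rule wins."""
    for _num, lo, hi, action in sorted(rules):
        if lo <= port <= hi:
            return action == "allow"
    return False  # implicit deny (the final '*' rule)

# A DENY on one port placed BEFORE the wide ephemeral allow, as the
# ephemeral-ports section earlier recommends:
nacl = [(90, 4444, 4444, "deny"), (100, 1024, 65535, "allow")]
print(nacl_allows(4444, nacl))       # False - the deny rule matched first
print(nacl_allows(5000, nacl))       # True
print(sg_allows(443, [(443, 443)]))  # True
```

Swapping the rule numbers (deny at 110, allow at 100) would make the deny unreachable, which is why rule ordering matters for network ACLs but not for security groups.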
Question : You are working with AWS resources, e.g. S3, RDS, and Amazon Glacier. You will be interacting with these resources in a controlled manner, and all of this access control is defined in an AWS IAM policy. Which of the following can you define in the IAM policy?
A. User name and password, which has access to AWS resources e.g. S3, Glacier
B. Region specific to the user
C. Actions that the user can perform
D. Service names on which the user has permissions