Question : QuickTechie.com has hosted a Tomcat-based web application on AWS EC2, with port 22 open only to selected IPs and port 80 open to everyone else. The organization has noticed that over the weekend their AWS usage increased by a few hundred dollars, because data transfer in the range of 50-60 TB happened during the weekend. The organization did not run any special program which could cause this transfer. What could be the potential source of the security breach? 1. QuickTechie.com might have enabled UDP ports for data transfer. 2. QuickTechie.com might have enabled TCP ports for data transfer. 3. 4. QuickTechie.com might not have changed the default admin password of the Tomcat manager.
Correct Answer : 4 Explanation: AWS security follows the shared responsibility model, where the user is as responsible as Amazon. AWS recommends that each organization manage their EC2 security groups carefully, as people tend to open ports to everyone. In this scenario the organization opened port 80 to all, while port 22 was open only to selected IPs; the security groups themselves should therefore prevent unnecessary traffic to or from the instance and web application. One likely root cause is that the organization did not change the default admin password of the Tomcat manager. In that case someone could deploy their own application on the EC2 instance using Tomcat's admin credentials, and that application could generate the unexplained traffic.
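The security-group setup described in this scenario can be sketched with boto3's request shapes. The office CIDR and security-group ID below are hypothetical placeholders, and the actual API call is left commented out since it needs AWS credentials:

```python
# Ingress rules matching the scenario: HTTP (port 80) open to everyone,
# SSH (port 22) restricted to selected IPs. CIDR and group ID are
# hypothetical examples.

def build_ingress_rules(office_cidr: str) -> list:
    """Build the IpPermissions list for authorize_security_group_ingress."""
    return [
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},    # web traffic: everyone
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": office_cidr}]},    # SSH: selected IPs only
    ]

rules = build_ingress_rules("203.0.113.0/24")

# To apply the rules (requires AWS credentials and a real group ID):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-0abc1234", IpPermissions=rules)
```

Note that even with correctly restricted security groups, an application-layer hole such as the default Tomcat manager password bypasses them entirely, which is why the breach in this question is at the application layer rather than the network layer.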
Question : QuickTechie.com provides scalable and secure SaaS to its clients. They are planning to host a web server and an app server on AWS VPC as separate tiers. The organization wants to implement scalability by configuring Auto Scaling and a load balancer for their app servers (middle tier) too. Which of the below-mentioned options suits their requirements? 1. The user should create an ELB with EC2-Classic and enable SSH with it for security. 2. Since ELB is internet-facing, it is recommended to set up HAProxy as the load balancer within the VPC. 3. 4. Create an internal load balancer in the VPC and register all the app servers with it.
Correct Answer : 4 Explanation: When you create your load balancer in a VPC, you can make it internal (private) or Internet-facing (public). When you make your load balancer internal, a DNS name is created for it that resolves to the private IP address of the load balancer; an internal load balancer is not exposed to the Internet. When you make your load balancer Internet-facing, a DNS name is created that resolves to its public IP address. The DNS records are publicly resolvable in both cases.
By combining internal and Internet-facing load balancers, you can balance requests between multiple tiers of your application. For example, say you have web servers at your front end that take requests from the Internet and pass them on to your back-end application instances. You can create an internal load balancer in your VPC and place your back-end application instances behind it. You can then create an Internet-facing load balancer, with a public DNS name and public IP address, and place it in front of your web servers. Your web servers will take requests coming from the Internet-facing load balancer and make requests to the internal load balancer, using private IP addresses resolved from the internal load balancer's DNS name. The internal load balancer routes requests to the back-end application instances, which also use private IP addresses and only accept requests from the internal load balancer. With this multi-tier architecture, all your infrastructure can use private IP addresses and security groups, so that the only part of your architecture with a public IP address is the Internet-facing load balancer.
For an Internet-facing load balancer to be reachable from the Internet, it must reside in a subnet that is connected to the Internet through an Internet gateway. The application instances behind the load balancer do not need to be in the same subnet as the load balancer.
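The two-tier layout above can be sketched as the request parameters boto3's `elbv2` client takes for `create_load_balancer`. All names, subnet IDs, and security-group IDs below are hypothetical, and the call itself is commented out since it needs AWS credentials:

```python
# Build create_load_balancer kwargs for an Application Load Balancer.
# scheme is either "internet-facing" or "internal".

def lb_params(name, scheme, subnets, security_groups):
    return {
        "Name": name,
        "Scheme": scheme,
        "Subnets": subnets,
        "SecurityGroups": security_groups,
        "Type": "application",
    }

# Public-facing LB in front of the web tier; its subnets must be routed
# to an Internet gateway.
public_lb = lb_params("web-lb", "internet-facing",
                      ["subnet-aaa1", "subnet-bbb2"], ["sg-web1"])

# Internal LB in front of the app tier; its DNS name resolves to
# private IP addresses only.
internal_lb = lb_params("app-lb", "internal",
                        ["subnet-ccc3", "subnet-ddd4"], ["sg-app1"])

# To create (requires AWS credentials); listeners and target groups are
# registered separately in the elbv2 API:
# import boto3
# boto3.client("elbv2").create_load_balancer(**public_lb)
```

The `Scheme` value is the only difference between the two tiers; everything else about which traffic reaches the app servers is enforced by subnets and security groups.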
Question : QuickTechie.com is trying to set up Auto Scaling within AWS VPC. Which of the below-mentioned steps is not required to be configured by the organization for this setup? 1. Configure the Auto Scaling group with the VPC ID in which instances will be launched. 2. Configure the Auto Scaling launch configuration with the VPC security group. 3. 4. Configure the Auto Scaling launch configuration so that it does not assign a public IP to instances.
Correct Answer : 1 Explanation: Amazon Virtual Private Cloud (Amazon VPC) enables you to define a virtual networking environment in a private, isolated section of the AWS cloud. You have complete control over your virtual networking environment. For more information, see the Amazon VPC User Guide.
Within a virtual private cloud (VPC), you can launch AWS resources such as an Auto Scaling group. An Auto Scaling group in a VPC works essentially the same way as it does on Amazon EC2 and supports the same set of features. This section provides you with an overview of Auto Scaling groups in a VPC and steps you through the process of creating an Auto Scaling group in a VPC. If you want to launch your Auto Scaling instances in Amazon EC2, see Getting Started with Auto Scaling.
Before you can create your Auto Scaling group in a VPC, you must first configure your VPC environment. You create your VPC by specifying a range of IP addresses in the classless inter-domain routing (CIDR) range of your choice (for example, 10.0.0.0/16). For more information about CIDR notation and what "/16" means, go to Classless Inter-Domain Routing on Wikipedia.
You can create a VPC that spans multiple Availability Zones and then add one or more subnets in each Availability Zone. A subnet in Amazon VPC is a subdivision within an Availability Zone, defined by a segment of the IP address range of the VPC. Using subnets, you can group your instances based on your security and operational needs. A subnet resides entirely within the Availability Zone it was created in. You launch Auto Scaling instances within the subnets.
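The CIDR arithmetic above can be checked with Python's standard-library `ipaddress` module: a 10.0.0.0/16 VPC range, and /24 subnets carved out of it (for example, one per Availability Zone):

```python
import ipaddress

# A /16 VPC range as in the example above.
vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)        # 65536 addresses: 2 ** (32 - 16)
print(vpc.netmask)              # 255.255.0.0

# Carve /24 subnets out of the VPC range, e.g. one per Availability Zone.
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))             # 256 possible /24 subnets
print(subnets[0], subnets[1])   # 10.0.0.0/24 10.0.1.0/24
```

Each /24 subnet here holds 256 addresses of the parent range and, in a VPC, would be pinned to a single Availability Zone.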
To enable communication between the Internet and the instances in your subnets, you must create an Internet gateway and attach it to your VPC. An Internet gateway enables your resources within the subnets to connect to the Internet through the Amazon EC2 network edge. If a subnet's traffic is routed to an Internet gateway, the subnet is known as a public subnet; if not, it is known as a private subnet. Use a public subnet for resources that must be connected to the Internet, and a private subnet for resources that do not need to be.
Before creating the Auto Scaling group, it is recommended that the user first create the launch configuration. Since the instances run in a VPC, it is recommended to set the parameter which does not assign a public IP to the instances. The user should also set the VPC security group in the launch configuration, and select the subnets in which the instances will be launched in the Auto Scaling group. High availability is provided by making those subnets part of separate Availability Zones.
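Putting the explanation's steps together, the launch configuration and Auto Scaling group could be sketched as below, in the request shape boto3's `autoscaling` client expects (AMI, subnet, and security-group IDs are hypothetical). Note that the group is tied to the VPC through its subnet IDs, not through a VPC ID, which is why configuring a VPC ID is the step that is not required:

```python
# Launch configuration: VPC security group attached, no public IPs assigned.
launch_configuration = {
    "LaunchConfigurationName": "app-lc",
    "ImageId": "ami-0abc12345",          # hypothetical AMI
    "InstanceType": "t2.micro",
    "SecurityGroups": ["sg-app1"],       # VPC security group (by ID)
    "AssociatePublicIpAddress": False,   # keep instances private
}

# Auto Scaling group: tied to the VPC via subnets, not a VPC ID.
# Subnets in separate Availability Zones provide the high availability.
auto_scaling_group = {
    "AutoScalingGroupName": "app-asg",
    "LaunchConfigurationName": "app-lc",
    "MinSize": 2,
    "MaxSize": 6,
    "VPCZoneIdentifier": "subnet-aaa1,subnet-bbb2",
}

# To create (requires AWS credentials):
# import boto3
# asc = boto3.client("autoscaling")
# asc.create_launch_configuration(**launch_configuration)
# asc.create_auto_scaling_group(**auto_scaling_group)
```

There is no `VpcId` parameter anywhere in this request: the subnets listed in `VPCZoneIdentifier` already imply the VPC.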
1. Serve user content from S3, CloudFront, and use Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a local DynamoDB table in each region and leverage SQS to capture changes to user preferences, with SQS workers for propagating updates to each table. 2. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3, CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage SNS notifications to propagate user preference changes to a worker node in each region. 3. Serve user content from S3, CloudFront, and Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a DynamoDB table and leverage SQS to capture changes to user preferences, with SQS workers for propagating DynamoDB updates. 4. Serve user content from S3, CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage Simple Workflow (SWF) to manage the propagation of user preferences from a centralized DB to each ElastiCache cluster.
1. Use RDS Multi-AZ with two tables, one for "Active calls" and one for "Terminated calls". In this way the "Active calls" table is always small and efficient to access. 2. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "IsActive" attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective. 3. Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute that can equal "active" or "terminated". In this way the Global Secondary Index can be used for all items in the table. 4. Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal to "ACTIVE" or "TERMINATED". In this way the SQL query is optimized by the use of the index.
1. Use Amazon Elastic MapReduce (EMR) S3DistCp as a synchronization mechanism between the on-premises database and a Hadoop cluster on AWS. 2. Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database. 3. … function to write to the on-premises database. 4. Provision an RDS read-replica database on AWS to handle the writes and synchronize the two databases using Data Pipeline.