Question : You are responsible for a legacy web application whose server environment is approaching end of life. You would like to migrate this application to AWS as quickly as possible. The application environment currently has the following limitations: the VM's single 10GB VMDK is almost full; the virtual network interface still uses the 10Mbps driver, which leaves your 100Mbps WAN connection completely underutilized; the application is running on a highly customized Windows VM within a VMware environment; and you do not have the installation media. This is a mission-critical application with an RTO (Recovery Time Objective) of 8 hours and an RPO (Recovery Point Objective) of 1 hour. How could you best migrate this application to AWS while meeting your business continuity requirements?
1. Use the EC2 VM Import Connector for vCenter to import the VM into EC2. 2. Use Import/Export to import the VM as an EBS snapshot and attach it to EC2. 3. … 4. Use the ec2-bundle-instance API to import an image of the VM into EC2.
Answer: 1
Explanation: You import the VMDK and let AWS load it as an instance; shipping physical media via AWS Import/Export does not seem a good solution for a mere 10GB VMDK file. VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment. This offering allows you to leverage your existing investments in the virtual machines that you have built to meet your IT security, configuration management, and compliance requirements by bringing those virtual machines into Amazon EC2 as ready-to-use instances. You can also export imported instances back to your on-premises virtualization infrastructure, allowing you to deploy workloads across your IT infrastructure.
VM Import/Export is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon S3. To import your images, use the AWS CLI or other developer tools to import a virtual machine (VM) image from your VMware environment. If you use the VMware vSphere virtualization platform, you can also use the AWS Management Portal for vCenter to import your VM. As part of the import process, VM Import will convert your VM into an Amazon EC2 AMI, which you can use to run Amazon EC2 instances. Once your VM has been imported, you can take advantage of Amazon's elasticity, scalability and monitoring via offerings like Auto Scaling, Elastic Load Balancing and CloudWatch to support your imported images. You can export previously imported EC2 instances using the Amazon EC2 API tools. You simply specify the target instance, virtual machine file format and a destination S3 bucket, and VM Import/Export will automatically export the instance to the S3 bucket. You can then download and launch the exported VM within your on-premises virtualization infrastructure.
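The bandwidth figures in the question can be sanity-checked with simple arithmetic. A minimal sketch, assuming the 10GB image crosses the link exactly once with no compression or protocol overhead:

```python
# Rough upload-time estimate for the 10GB VMDK (assumption: one full
# transfer, decimal units, no compression, no protocol overhead).
IMAGE_BITS = 10 * 8 * 10**9        # 10 GB expressed in bits

def transfer_minutes(link_mbps: float) -> float:
    """Minutes to move the image over a link of the given speed."""
    return IMAGE_BITS / (link_mbps * 10**6) / 60

old_nic = transfer_minutes(10)     # legacy 10Mbps virtual NIC
wan = transfer_minutes(100)        # the 100Mbps WAN, fully utilized

print(f"10Mbps NIC: {old_nic:.0f} min, 100Mbps WAN: {wan:.0f} min")
```

Even over the crippled 10Mbps NIC the one-time copy takes on the order of two hours, comfortably inside the 8-hour RTO; the 1-hour RPO is what dictates how close to cutover the final image must be taken.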
Question : You are migrating a legacy client-server application to AWS. The application responds to a specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple application servers and a database server. Remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and are currently taking that information from the TCP socket. A Multi-AZ RDS MySQL instance will be used for the database. During the migration you can change the application code, but you have to file a change request. How would you implement the architecture on AWS in order to maximize scalability and high availability?
1. File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two application servers in different AZs. 2. File a change request to implement Cross-Zone support in the application. Use an ELB with a TCP Listener and Cross-Zone Load Balancing enabled, two application servers in different AZs. 3. … Use Route 53 with Latency Based Routing enabled to distribute load on two application servers in different AZs. 4. File a change request to implement Alias Resource support in the application. Use a Route 53 Alias Resource Record to distribute load on two application servers in different AZs.
Answer: 1
Explanation: Proxy Protocol is an Internet protocol used to carry connection information from the source requesting the connection to the destination for which the connection was requested. Elastic Load Balancing uses Proxy Protocol version 1, which uses a human-readable header format.
By default, when you use Transmission Control Protocol (TCP) or Secure Sockets Layer (SSL) for both front-end and back-end connections, your load balancer forwards requests to the back-end instances without modifying the request headers. If you enable Proxy Protocol, a human-readable header is added to the request header with connection information such as the source IP address, destination IP address, and port numbers. The header is then sent to the back-end instance as part of the request.
You can enable Proxy Protocol on ports that use either the SSL or the TCP protocol. You can use Proxy Protocol to capture the source IP of your client when you are using a non-HTTP protocol, or when you are using HTTPS and not terminating the SSL connection on your load balancer.
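To make the human-readable header concrete, here is a minimal parser for a Proxy Protocol version 1 line; the sample addresses and ports are made up for illustration:

```python
# Minimal Proxy Protocol v1 header parser (illustrative sketch).
# v1 header: "PROXY <TCP4|TCP6> <src-ip> <dst-ip> <src-port> <dst-port>\r\n"

def parse_proxy_v1(header: bytes) -> dict:
    line = header.decode("ascii").rstrip("\r\n")
    parts = line.split(" ")
    if len(parts) != 6 or parts[0] != "PROXY":
        raise ValueError("not a Proxy Protocol v1 header")
    return {
        "proto": parts[1],         # TCP4 or TCP6
        "src_ip": parts[2],        # the real client address the app needs
        "dst_ip": parts[3],
        "src_port": int(parts[4]),
        "dst_port": int(parts[5]),
    }

# Example header such as an ELB TCP listener would prepend (sample values):
hdr = b"PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\n"
print(parse_proxy_v1(hdr)["src_ip"])
```

The application servers read this single line before the normal application payload, recovering the client IP that would otherwise be replaced by the load balancer's address.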
Question : Your department creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data? 1. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. 2. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. 3. … 4. Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift.
Answer: 1
Explanation: Data that can be regenerated is a candidate for RRS in S3, but not all of this data qualifies: the raw log files cannot be regenerated, so only the derived PDF and CSV output should use RRS. Hence options 2 and 3 are out. Get the Best Value for Amazon EC2 Capacity: Spot instances provide the reliability, security, performance, control, and elasticity of Amazon EC2, at low market-driven prices that decrease even further when demand subsides. Reduce your operating costs by 50-90% with Spot, compared to On-Demand instances. Amazon EC2 Spot instances are spare EC2 instances that you can bid on to run your cloud computing applications. Since Spot instances are often available at a lower price, you can significantly reduce the cost of running your applications, grow your application's compute capacity and throughput for the same budget, and enable new types of cloud computing applications.
So for EMR: Spot Instances. An Amazon Redshift data warehouse is an enterprise-class relational database query and management system.
Amazon Redshift supports client connections with many types of applications, including business intelligence (BI), reporting, data, and analytics tools.
When you execute analytic queries, you are retrieving, comparing, and evaluating large amounts of data in multiple-stage operations to produce a final result.
Amazon Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing, columnar data storage, and very efficient, targeted data compression encoding schemes.
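The 50-90% Spot discount quoted above translates into cluster-level savings that are easy to estimate. A minimal sketch, where the instance counts, the On-Demand rate, and the 70% discount are all hypothetical figures:

```python
# Rough EMR cost comparison: core nodes On-Demand, task nodes on Spot
# (all prices hypothetical; Spot modeled at a 70% discount).
ON_DEMAND_RATE = 0.25        # $/hour per instance (assumed)
SPOT_DISCOUNT = 0.70         # within the 50-90% range cited above

def hourly_cost(core_nodes: int, task_nodes: int) -> float:
    """Cluster $/hour with core nodes On-Demand and task nodes on Spot."""
    spot_rate = ON_DEMAND_RATE * (1 - SPOT_DISCOUNT)
    return core_nodes * ON_DEMAND_RATE + task_nodes * spot_rate

all_on_demand = 10 * ON_DEMAND_RATE   # 10 nodes, everything On-Demand
mixed = hourly_cost(2, 8)             # 2 core On-Demand, 8 task on Spot

print(f"all On-Demand: ${all_on_demand:.2f}/h, with Spot: ${mixed:.2f}/h")
```

Keeping the core (HDFS-holding) nodes On-Demand means a reclaimed Spot instance costs only recomputation, not data, which is why the correct answer adds Spot capacity to EMR jobs rather than running the whole system, Redshift included, on Spot.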
1. Enable CloudFront to deliver access logs to S3 and use them as input to the Elastic MapReduce job. 2. Turn on CloudTrail and use trail log files on S3 as input to the Elastic MapReduce job. 3. … 4. Use the Elastic Beanstalk "Rebuild Environment" option to update log delivery to the Elastic MapReduce job. 5. Use the Elastic Beanstalk "Restart App server(s)" option to update log delivery to the Elastic MapReduce job.
1. Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account. 2. Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts. 3. … 4. Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts.
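The cross-account role pattern in the options above hinges on a trust policy in the Dev or Test account that names the Master account as principal. A minimal sketch of such a policy document; the account ID is a placeholder:

```python
import json

# Trust policy for a role in the Dev or Test account that lets principals
# in the Master account (placeholder ID 111111111111) assume it.
MASTER_ACCOUNT_ID = "111111111111"   # hypothetical account ID

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MASTER_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}

print(json.dumps(trust_policy, indent=2))
```

A Master-account IAM user then calls sts:AssumeRole against the role's ARN to obtain temporary credentials in the Dev or Test account; Consolidated Billing alone (option 4) links invoices but grants no such API access.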