Question : QuickTechie.com hosts all of its applications in an AWS VPC. Its security partner, who works from a remote location, wants access to AWS to view all the VPC records. How can the organization meet the partner's expectations without compromising the security of its AWS infrastructure?
1. The organization should not accept the request, as sharing credentials means compromising security.
2. Create an IAM user who will have read-only access to the AWS VPC and share those credentials with the auditor.
3. (option text not available)
4. Create an IAM role which will have read-only access to all EC2 services, including VPC, and assign that role to the auditor.
Explanation: Your security credentials identify you to services in AWS and grant you unlimited use of your AWS resources, such as your Amazon VPC resources. You can use AWS Identity and Access Management (IAM) to allow other users, services, and applications to use your Amazon VPC resources without sharing your security credentials. You can choose to allow full or limited use of your resources by granting users permission to use specific Amazon EC2 API actions. Some API actions support resource-level permissions, which allow you to control the specific resources that users can create or modify.

The following policy grants users permission to create and manage your VPC. You might attach this policy to a group of network administrators. The Action element specifies the API actions related to VPCs, subnets, Internet gateways, customer gateways, virtual private gateways, VPN connections, route tables, Elastic IP addresses, security groups, network ACLs, and DHCP options sets. The policy also allows the group to run, stop, start, and terminate instances, and to list Amazon EC2 resources.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*Vpc*",
        "ec2:*Subnet*",
        "ec2:*Gateway*",
        "ec2:*Vpn*",
        "ec2:*Route*",
        "ec2:*Address*",
        "ec2:*SecurityGroup*",
        "ec2:*NetworkAcl*",
        "ec2:*DhcpOptions*",
        "ec2:RunInstances",
        "ec2:StopInstances",
        "ec2:StartInstances",
        "ec2:TerminateInstances",
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
The policy uses wildcards to specify all actions for each type of object (for example, *SecurityGroup*). Alternatively, you could list each action explicitly. If you use wildcards, be aware that if AWS adds new actions whose names include any of the wildcarded strings in the policy, the policy automatically grants the group access to those new actions.
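For the scenario in the question, the partner only needs to view VPC records, so a far narrower policy than the administrator example above is appropriate. The following is a minimal boto3 sketch of creating such a read-only IAM user; the user name, policy name, and the choice of ec2:Describe* as the read-only action set are illustrative assumptions, not part of the original question.

# Minimal sketch: create a read-only IAM user for the auditor using boto3.
# The user and policy names below are hypothetical.
import json
import boto3

iam = boto3.client("iam")

# Read-only policy: ec2:Describe* covers VPCs, subnets, route tables,
# security groups, and other VPC records without granting any write access.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:Describe*"],
            "Resource": "*",
        }
    ],
}

iam.create_user(UserName="vpc-auditor")  # hypothetical user name
iam.put_user_policy(
    UserName="vpc-auditor",
    PolicyName="VpcReadOnly",
    PolicyDocument=json.dumps(read_only_policy),
)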
Question : Your corporate headquarters in New York can have an AWS Direct Connect connection to the VPC, and your branch offices can use VPN connections to the VPC. The branch offices in Los Angeles and Miami can send and receive data with each other and with your corporate headquarters, all using the AWS ________.
1. VPN
2. VPN CloudHub
3. (option text not available)
4. VNC
Explanation: If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing Internet connections who would like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices.

To use the AWS VPN CloudHub, you must create a virtual private gateway with multiple customer gateways, each with a unique Border Gateway Protocol (BGP) Autonomous System Number (ASN). Customer gateways advertise the appropriate routes (BGP prefixes) over their VPN connections. These routing advertisements are received and re-advertised to each BGP peer, enabling each site to send data to and receive data from the other sites. The routes for each spoke must have unique ASNs, and the sites must not have overlapping IP ranges. Each site can also send and receive data from the VPC as if it were using a standard VPN connection.
Sites that use AWS Direct Connect connections to the virtual private gateway can also be part of the AWS VPN CloudHub. For example, your corporate headquarters in New York can have an AWS Direct Connect connection to the VPC and your branch offices can use VPN connections to the VPC. The branch offices in Los Angeles and Miami can send and receive data with each other and with your corporate headquarters, all using the AWS VPN CloudHub.
To configure the AWS VPN CloudHub, you use the AWS Management Console to create multiple customer gateways, each with the unique public IP address of the gateway and a unique ASN. Next, you create a VPN connection from each customer gateway to a common virtual private gateway. Each VPN connection must advertise its specific BGP routes. This is done using the network statements in the VPN configuration files for the VPN connection. The network statements differ slightly depending on the type of router you use.
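The console steps above correspond to a handful of EC2 API calls. The following is a minimal boto3 sketch of the same CloudHub topology (one common virtual private gateway, one customer gateway per site with a unique ASN); the public IP addresses and ASNs are illustrative assumptions.

# Minimal sketch of the VPN CloudHub topology using boto3.
# Public IPs and ASNs below are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2")

# One common virtual private gateway acts as the hub.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]

# One customer gateway per branch office, each with a unique BGP ASN.
sites = [("203.0.113.10", 65001), ("198.51.100.20", 65002)]  # (public IP, ASN)
for public_ip, asn in sites:
    cgw = ec2.create_customer_gateway(
        Type="ipsec.1", PublicIp=public_ip, BgpAsn=asn
    )["CustomerGateway"]
    # A dynamic (BGP) VPN connection from each spoke to the common hub.
    ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGatewayId"],
    )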
When using an AWS VPN CloudHub, you pay typical Amazon VPC VPN connection rates. You are billed the connection rate for each hour that each VPN is connected to the virtual private gateway. When you send data from one site to another using the AWS VPN CloudHub, there is no cost to send data from your site to the virtual private gateway; you only pay standard AWS data transfer rates for data that is relayed from the virtual private gateway to your endpoint. For example, if you have a site in Los Angeles and a second site in New York, and both sites have a VPN connection to the virtual private gateway, you pay $0.05 per hour for each VPN connection ($0.10 per hour in total). You also pay the standard AWS data transfer rates for all data that you send from Los Angeles to New York (and vice versa) that traverses each VPN connection: traffic sent over the VPN connection to the virtual private gateway is free, but traffic relayed from the virtual private gateway out to the other endpoint is billed at the standard AWS data transfer rate.
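As a rough illustration of that billing model, here is a back-of-the-envelope estimate in Python; the hourly connection rate comes from the example above, while the data-transfer rate and volume are placeholder assumptions, not current AWS pricing.

# Back-of-the-envelope CloudHub cost estimate for the two-site example above.
# Rates and data volume are placeholder assumptions, not current AWS pricing.
VPN_RATE_PER_HOUR = 0.05   # $/hour per VPN connection (rate from the example)
DATA_RATE_PER_GB = 0.09    # $/GB relayed out of the virtual private gateway (assumed)

hours = 24 * 30            # one month of connectivity
connections = 2            # Los Angeles and New York
relayed_gb = 100           # data relayed between the two sites (assumed)

connection_cost = VPN_RATE_PER_HOUR * hours * connections
transfer_cost = DATA_RATE_PER_GB * relayed_gb  # inbound leg to the gateway is free

print(f"Connection hours: ${connection_cost:.2f}")   # 0.05 * 720 * 2 = $72.00
print(f"Data transfer:    ${transfer_cost:.2f}")     # 0.09 * 100 = $9.00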
Question : Your company hosts an on-premises legacy engineering application with 900 GB of data shared via a central file server. The engineering data consists of thousands of individual files ranging in size from megabytes to multiple gigabytes. Engineers typically modify 5-10 percent of the files a day. Your CTO would like to migrate this application to AWS, but only if the application can be migrated over the weekend to minimize user downtime. You calculate that it will take a minimum of 48 hours to transfer 900 GB of data using your company's existing 45-Mbps Internet connection. After replicating the application's environment in AWS, which option will allow you to move the application's data to AWS without losing any data and within the given timeframe?
1. Copy the data to Amazon S3 using multiple threads and multipart upload for large files over the weekend, and work in parallel with your developers to reconfigure the replicated application environment to leverage Amazon S3 to serve the engineering files.
2. Sync the application data to Amazon S3 starting a week before the migration; on Friday morning perform a final sync, and copy the entire data set to your AWS file server after the sync completes.
3. Copy the application data to a USB drive and ship it to AWS for import (via AWS Import/Export) onto an EBS volume, then mount the resulting EBS volume to your AWS file server on Sunday.
4. Leverage the AWS Storage Gateway to create a gateway-stored volume. On Friday, copy the application data to the Storage Gateway volume. After the data has been copied, take a snapshot of the volume and restore it as an EBS volume to be attached to your AWS file server on Sunday.
Explanation: The situation is network-bound. Option A does not help: no matter how many threads are created, the transfer is still limited by the network link. Option B is the best choice, because only 5-10 percent of the files are modified each day; after a week, roughly 0.95^7 ≈ 70 percent of the files are unchanged, so most of the data can be copied to S3 before the migration, and the final sync only needs to compare checksums and copy the files that differ. This is the same concept Dropbox uses: you could drag 1 TB into Dropbox a week in advance. Option C is unattractive, because sending sensitive data on a USB drive and trusting the postal service is not a good idea.
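A quick sanity check of the two numbers behind that reasoning, namely the 48-hour full-transfer estimate and the roughly 70 percent unchanged fraction (assuming the optimistic 5 percent end of the stated 5-10 percent daily change rate):

# Sanity-check the migration arithmetic from the explanation above.
data_gb = 900
link_mbps = 45

# Full transfer time over the 45-Mbps link (ideal throughput, no overhead).
transfer_hours = (data_gb * 1000 * 8) / link_mbps / 3600
print(f"Full transfer: {transfer_hours:.1f} hours")   # ~44.4 h, so 48 h is a fair minimum

# Fraction of files untouched after a week at a 5% daily change rate.
unchanged = 0.95 ** 7
print(f"Unchanged after 7 days: {unchanged:.0%}")     # ~70%

# A pre-sync moves ~70% of the data early; Friday's final sync handles the rest.
final_sync_hours = transfer_hours * (1 - unchanged)
print(f"Final sync: {final_sync_hours:.1f} hours")    # ~13.4 h, fits the weekend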
Question : (question text not available)
1. Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte-range requests.
2. Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ.
3. (option text not available)
4. Generate the reports by querying the ElastiCache database caching tier.