Question: You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately, this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there is no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured?
(Choose 3 answers)
A. An AWS Direct Connect link between the VPC and the network housing the internal services.
B. An Internet Gateway to allow a VPN connection.
C. An Elastic IP address on the VPC instance.
D. An IP address space that does not conflict with the one on-premises.
E. Entries in Amazon Route 53 that allow the instance to resolve its dependencies' IP addresses.
F. A VM Import of the current virtual machine.
Explanation: VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment. This offering allows you to leverage your existing investments in the virtual machines that you have built to meet your IT security, configuration management, and compliance requirements by bringing those virtual machines into Amazon EC2 as ready-to-use instances. You can also export imported instances back to your on-premises virtualization infrastructure, allowing you to deploy workloads across your IT infrastructure. (Hence Statement F is correct)
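As a rough illustration of the import step, the sketch below uses boto3's EC2 ImportImage API to turn a VMDK that has already been uploaded to S3 into an AMI. The bucket name, key, and region are placeholders, and the surrounding assumptions (an existing VM Import service role, a supported image format) are not taken from the question itself.

import time

import boto3

# Hypothetical names: replace the bucket/key/region with your own values.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Kick off a VM Import task for a disk image previously uploaded to S3.
task = ec2.import_image(
    Description="Legacy app VM imported from the on-premises datacenter",
    DiskContainers=[
        {
            "Description": "Primary disk",
            "Format": "vmdk",
            "UserBucket": {"S3Bucket": "my-vm-import-bucket", "S3Key": "legacy-app.vmdk"},
        }
    ],
)
task_id = task["ImportTaskId"]

# Poll until the import task finishes; the resulting AMI can then be launched in the VPC.
while True:
    status = ec2.describe_import_image_tasks(ImportTaskIds=[task_id])["ImportImageTasks"][0]
    if status["Status"] in ("completed", "deleted"):
        print("Import finished:", status.get("ImageId"))
        break
    time.sleep(60)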
AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources such as objects stored in Amazon S3 using public IP address space, and private resources such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC) using private IP space, while maintaining network separation between the public and private environments. Virtual interfaces can be reconfigured at any time to meet your changing needs. (Hence statement A should be correct)
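The sketch below is only an illustration of the virtual-interface step, not the question's required setup: it assumes an already-provisioned Direct Connect connection and an existing virtual private gateway, and uses boto3's Direct Connect API to carve a private virtual interface off that connection so instances in the VPC can reach the on-premises network over private IP space. The IDs, VLAN, and ASN are placeholders.

import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Hypothetical identifiers: an existing DX connection and virtual private gateway.
CONNECTION_ID = "dxcon-EXAMPLE"
VGW_ID = "vgw-EXAMPLE"

# Partition the dedicated connection into a private virtual interface (802.1Q VLAN)
# that terminates on the VPC's virtual private gateway.
vif = dx.create_private_virtual_interface(
    connectionId=CONNECTION_ID,
    newPrivateVirtualInterface={
        "virtualInterfaceName": "legacy-app-private-vif",
        "vlan": 101,                # VLAN tag agreed with your network team
        "asn": 65000,               # BGP ASN of the on-premises router
        "virtualGatewayId": VGW_ID, # attach the VIF to the VPC's virtual private gateway
    },
)
print("Created private VIF:", vif["virtualInterfaceId"])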
While designing your Amazon VPC, choose the CIDR block based on the number of IP addresses you need and on whether you will establish connectivity with your data center. The allowed block size is between a /28 netmask and a /16 netmask, so a VPC can contain from 16 to 65,536 IP addresses. The CIDR block cannot be modified once the VPC is created, so it is usually best to choose a block with room to grow. Also, when you design the Amazon VPC architecture to communicate with your on-premises data center, ensure that the CIDR range used in the Amazon VPC does not overlap or conflict with the CIDR blocks on-premises. Note: using overlapping CIDR blocks when configuring the customer gateway causes conflicts. For example, if your VPC CIDR block is 10.0.0.0/16 and you have a 10.0.25.0/24 subnet in the data center, communication from instances in the VPC to the data center will not work, because that subnet falls inside the VPC CIDR. To avoid this, keep the two environments in different, non-overlapping private ranges, for example the Amazon VPC in 10.0.0.0/16 and the data center in 172.16.0.0/24. (Hence statement D should be correct)
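A quick way to sanity-check the ranges before creating the VPC is Python's standard ipaddress module. The sketch below flags the overlap described above and then creates a VPC with a non-conflicting block; the CIDR values mirror the example in the explanation and are otherwise arbitrary.

import ipaddress

import boto3

# On-premises ranges that the application must keep reaching (example values from above).
onprem_subnets = [ipaddress.ip_network("10.0.25.0/24")]

def conflicts(candidate_cidr: str) -> bool:
    """Return True if the candidate VPC CIDR overlaps any on-premises subnet."""
    candidate = ipaddress.ip_network(candidate_cidr)
    return any(candidate.overlaps(subnet) for subnet in onprem_subnets)

print(conflicts("10.0.0.0/16"))    # True  - 10.0.25.0/24 falls inside this block
print(conflicts("172.31.0.0/16"))  # False - safe to use for the VPC

# Create the VPC only with a non-conflicting block.
ec2 = boto3.client("ec2", region_name="us-east-1")
if not conflicts("172.31.0.0/16"):
    vpc = ec2.create_vpc(CidrBlock="172.31.0.0/16")
    print("Created VPC:", vpc["Vpc"]["VpcId"])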
An Elastic IP address (EIP) is a static IP address designed for dynamic cloud computing. With an EIP, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. Your EIP is associated with your AWS account, not a particular instance, and it remains associated with your account until you choose to explicitly release it. (We don't need to remap any instances in this scenario, so an Elastic IP does not help the application reach back to its on-premises dependencies; statement C is not correct.)
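For completeness, this is roughly what allocating and remapping an Elastic IP looks like with boto3. It is included only to illustrate the feature the option describes, since the scenario above does not require it; the instance IDs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate an Elastic IP for use in a VPC.
allocation = ec2.allocate_address(Domain="vpc")

# Associate it with one instance, then remap it to a replacement instance on failure.
ec2.associate_address(InstanceId="i-0123456789abcdef0", AllocationId=allocation["AllocationId"])
ec2.associate_address(
    InstanceId="i-0fedcba9876543210",        # replacement instance
    AllocationId=allocation["AllocationId"],
    AllowReassociation=True,                 # allow moving the EIP off the failed instance
)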
Based on this, the correct answers are A, D, and F.
Question: Your system recently experienced downtime. During the troubleshooting process you found that a new administrator had mistakenly terminated several production EC2 instances. Which of the following strategies will help prevent a similar situation in the future? The administrator still must be able to:
- launch, start, stop, and terminate development resources;
- launch and start production instances.
1. Create an IAM user, which is not allowed to terminate instances by leveraging production EC2 termination protection.
2. Leverage resource-based tagging along with an IAM user, which can prevent specific users from terminating production EC2 resources.
3.
4. Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances.
Correct Answer: 2. Explanation: Customers have long been able to use IAM policies to control which of their users or groups could start, stop, reboot, and terminate instances across all EC2 instances under an account. With the release of EC2 resource-level permissions, customers can now strictly control which IAM users or groups can start, stop, reboot, and terminate specific EC2 instances. This ability to assign control of an individual instance to a specific user or group helps organizations implement important security principles like separation of duties (preventing critical functions from being in the hands of one user) and least privilege (providing each user access only to the minimum resources they need to do their job). For example, you probably don't want to give everyone in your organization permission to terminate business-critical production instances, so you can assign that privilege to only a few trusted administrators. Below is a four-step process showing how to use the resource-level permissions feature along with IAM policies to help protect specific instances.
Step 1: Categorize your instances. The most flexible way to categorize your instances is by tagging them. For smaller environments, simple descriptive tags like "critical=true" may be sufficient. For larger environments, you may want to organize your instances into more complex schemas using multiple tags. Some examples of tags that could be used to organize your instances include: "stack=prod", "service_tier=1", "app=corporate_website", "layer=db", "department=engineering", "cost_center=153". For information on how to apply tags to your instances, see the Tagging Your Amazon EC2 Resources documentation.
Step 2: Define how authorized users can (or can't) manage specific instances. Because IAM policies follow least privilege, users will not be able to manage your critical instances unless you give them permission to do so. You can create a policy that grants permission to start, stop, reboot, and terminate only specific instances (e.g., the ones carrying a specific tag such as "critical"); a sketch of such a policy appears after this list. To add more security controls around users who manage specific instances, you may want to require the use of multi-factor authentication within a time period (e.g., the last 15 minutes). You can also force users to come from a trusted source IP range when making the start, stop, reboot, or terminate requests. In some contexts, you may optionally choose to explicitly deny a group of users the ability to manage specific instances. Explicit deny policies are not generally required, since IAM denies by default, but an explicit deny can provide an additional layer of protection: the presence of a deny statement causes the user to be denied an action even if another policy statement would have allowed it.
Step 3: Lock down your tags. If you choose to use tags as a basis for setting permissions on instances, you will want to restrict which users have permission to apply and remove tags. For EC2, restrict which users can use the ec2:CreateTags and ec2:DeleteTags actions, so that only those users can change your instance inventory. Note: AWS planned to enable tag-specific permissions for ec2:CreateTags and ec2:DeleteTags in the future, which would enable you to set permissions on a per-tag basis.
Step 4: Attach your policies to IAM users. As with other IAM policies, you can attach any of the policies above to any IAM principal, including individual IAM users, groups, and roles. The policies apply only to the principals they are attached to, so perform periodic audits to ensure that the policies have been deployed appropriately.
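As a hedged illustration of Step 2 (the policy name, tag values, account ID, and region below are assumptions, not taken from the question), this sketch uses boto3 to create a policy that lets an administrator start, stop, reboot, and terminate instances tagged stack=dev, start instances tagged stack=prod, and explicitly denies terminating anything tagged stack=prod. Launch permissions (ec2:RunInstances) are omitted because their resource-level permissions span several resource types.

import json

import boto3

iam = boto3.client("iam")

# Hypothetical account/region; replace with your own values.
INSTANCE_ARN = "arn:aws:ec2:us-east-1:123456789012:instance/*"

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Full lifecycle control over development instances.
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:RebootInstances",
                "ec2:TerminateInstances",
            ],
            "Resource": INSTANCE_ARN,
            "Condition": {"StringEquals": {"ec2:ResourceTag/stack": "dev"}},
        },
        {   # Production instances may only be started.
            "Effect": "Allow",
            "Action": ["ec2:StartInstances"],
            "Resource": INSTANCE_ARN,
            "Condition": {"StringEquals": {"ec2:ResourceTag/stack": "prod"}},
        },
        {   # Explicit deny: terminating production instances is blocked even if
            # another attached policy would allow it.
            "Effect": "Deny",
            "Action": ["ec2:TerminateInstances"],
            "Resource": INSTANCE_ARN,
            "Condition": {"StringEquals": {"ec2:ResourceTag/stack": "prod"}},
        },
    ],
}

response = iam.create_policy(
    PolicyName="ec2-admin-dev-only-terminate",
    PolicyDocument=json.dumps(policy_document),
)
print("Created policy:", response["Policy"]["Arn"])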
Enabling Termination Protection for an Instance : By default, you can terminate your instance using the Amazon EC2 console, command line interface, or API. If you want to prevent your instance from being accidentally terminated using Amazon EC2, you can enable termination protection for the instance. The DisableApiTermination attribute controls whether the instance can be terminated using the console, CLI, or API. By default, termination protection is disabled for your instance. You can set the value of this attribute when you launch the instance, while the instance is running, or while the instance is stopped (for Amazon EBS-backed instances).
The DisableApiTermination attribute does not prevent you from terminating an instance by initiating shutdown from within the instance (using an operating system command for system shutdown) when the InstanceInitiatedShutdownBehavior attribute is set to terminate. For more information, see Changing the Instance Initiated Shutdown Behavior. You can't use termination protection to prevent instances that are part of an Auto Scaling group from terminating. However, you can specify which instances should terminate first.
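A minimal sketch of setting both attributes with boto3 follows (the instance ID is a placeholder); each attribute must be changed in its own ModifyInstanceAttribute call.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder production instance

# Enable termination protection: blocks TerminateInstances via console, CLI, or API.
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    DisableApiTermination={"Value": True},
)

# Make an OS-level shutdown stop the instance instead of terminating it,
# closing the gap described above.
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    InstanceInitiatedShutdownBehavior={"Value": "stop"},
)

# Verify the setting.
attr = ec2.describe_instance_attribute(
    InstanceId=INSTANCE_ID, Attribute="disableApiTermination"
)
print("Termination protection enabled:", attr["DisableApiTermination"]["Value"])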
Question: Your Fortune 500 company has undertaken a TCO (total cost of ownership) analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents. Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? (Choose 3 answers)
A. Setting up a federation proxy or identity provider
B. Using AWS Security Token Service to generate temporary tokens
C. Tagging each folder in the bucket
D. Configuring IAM role
E. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket
Here's the basic flow of an AD user who wants to access the AWS Management Console:
1. The user signs on to the corporate network with their AD credentials. The sample creates an internal website that hosts a proxy server, and the user browses to that website. The site authenticates the user against AD and displays a set of IAM roles determined by the user's AD group membership.
2. The user selects the desired role and clicks "Sign in to AWS Management Console". Behind the scenes, the proxy calls the AWS Security Token Service (STS) to assume the selected role. The response includes temporary security credentials. Using these credentials, the federation proxy constructs a temporary sign-in URL.
3. The user is redirected to the AWS Management Console and is signed in, limited to the privileges defined in the role they selected. By default, the console session will expire after one hour and the console will be inaccessible. This helps protect your AWS account in the case where users mistakenly leave their computers unlocked while signed in to the console. When the session expires, the user is redirected to a page that includes a URL to the site that hosts the federation proxy; clicking the URL redirects the user to that site so they can re-authenticate. (Hence A and B are correct.)
A question came up about whether you can use this technique for federated users instead of for IAM users, as the examples show. Yes you can, but not in exactly the same way. Federated users do not have an entity inside of IAM, so the aws:username policy variable is not available when working with federated users. However, when you work with federated users (using the AWS STS GetFederationToken or AssumeRole APIs), you're using a proxy server to request temporary security credentials on behalf of the federated user, and when you request temporary security credentials you have the option of passing a policy as part of the API request. Therefore, before you call GetFederationToken or AssumeRole, you can create a policy and substitute the federated user's name where the aws:username variable would otherwise be used. A sketch of this pattern appears below.
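Below is a minimal sketch of that pattern, with the bucket name, user name, and proxy logic as assumptions: the federation proxy substitutes the authenticated directory user's name into an S3 policy that scopes access to that user's folder (key prefix), then calls STS GetFederationToken with the scoped policy to obtain temporary credentials.

import json

import boto3

sts = boto3.client("sts")

BUCKET = "corp-personal-docs"  # hypothetical bucket name

def credentials_for(directory_user: str) -> dict:
    """Return temporary credentials scoped to the user's own folder in the bucket.

    Because federated users cannot rely on the aws:username variable, the proxy
    substitutes the user's name into the policy before calling STS.
    """
    scoped_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # Allow listing only within the user's own prefix.
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{BUCKET}",
                "Condition": {"StringLike": {"s3:prefix": [f"home/{directory_user}/*"]}},
            },
            {   # Allow object access only under the user's own folder.
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{BUCKET}/home/{directory_user}/*",
            },
        ],
    }
    token = sts.get_federation_token(
        Name=directory_user[:32],      # federated user name (max 32 characters)
        Policy=json.dumps(scoped_policy),
        DurationSeconds=3600,          # one-hour session, as described above
    )
    return token["Credentials"]

# Example: credentials the proxy would hand to user "alice" after AD authentication.
creds = credentials_for("alice")
print("Temporary access key:", creds["AccessKeyId"])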
As such, there are no real folders inside an S3 bucket; the "folders" are just key-name prefixes, so tagging each folder (option C) is not a workable control, and it is the IAM role (option D) that the temporary credentials are built on. Hence A, B, and D should be correct. Refer: http://blogs.aws.amazon.com/security/post/Tx1P2T3LFXXCNB5/Writing-IAM-policies-Grant-access-to-user-specific-folders-in-an-Amazon-S3-bucke (this link is available in the study notes)