Question : You are working with an investment bank that has recently announced a hybrid cloud strategy: AWS as the public cloud and an in-house datacenter as the private cloud. The company mandates multi-factor authentication for every AWS console login. One team has started using DynamoDB, a NoSQL solution, from an application installed on on-premise Linux instances. During development and testing they have been using access keys and secret keys stored locally in a text file on the same Linux host. A member of your security team has raised a concern about storing these keys in a plain-text file and using them this way, and has asked you to come up with a more secure and safe way for the Linux instance to interact with DynamoDB. Which of the following should you consider the safest approach?
A. Amazon can store keys in a more secure way, so you create an encrypted EBS volume and store the text file on that encrypted volume.
B. You enable encryption between DynamoDB and the application installed on the Linux instance, using secure certificates.
C. You encrypt the text file and store it on the same instance; whenever you need to connect to DynamoDB, you decrypt the key.
D. You use the Amazon-provided KMS (Key Management Service).
E. You leverage IAM Role functionality.
1. A,B 2. B,C 3. C,D 4. B,E 5. A,D
Correct Answer : 4 Explanation: Ans: B, E
Exp : This question has a latent security aspect. It focuses on access keys and secret keys, but since you have to select more than one answer, you must check which options are appropriate both for secure data transfer and for making a secure connection to AWS services.
Access Keys: You should always avoid saving access keys on the same host your application runs on; it is not secure at all. In the exam, the question may instead describe an EC2 instance launched from an AMI (instead of your datacenter, you launch an EC2 instance and deploy your application on it to connect to DynamoDB); in that scenario, too, you should not store these keys in a text file.
Can I store them in an S3 bucket? : What would be the point of storing access keys in an S3 bucket? No, that is not a secure approach either.
Why keys at all? : Whenever you see a question about credentials and IAM Role appears among the options, consider it carefully. There is a high probability it is the correct answer, as in this question: a role supplies temporary credentials automatically, so no long-term keys need to be stored on the host at all.
KMS: Key Management Service manages encryption keys used to encrypt your data; it is not a store for IAM access keys and secret keys, so it does not address this requirement.
Encryption: Yes, whenever data leaves one network (AWS) and travels to another (a host in your datacenter), you must have encryption enabled so that your data cannot be read or tampered with by a man-in-the-middle attack. DynamoDB endpoints support HTTPS/TLS for exactly this reason.
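Options B and E together can be sketched with boto3: with an IAM role attached to the instance (or credentials supplied by the SDK's provider chain), no keys ever sit in a local text file, and the DynamoDB endpoint is reached over TLS. This is a minimal illustrative sketch, not a definitive implementation; the region, the table name `Orders`, and its key schema are placeholder assumptions, and the call requires live AWS credentials to run.

```python
import boto3

# No access key / secret key in code or on disk: boto3's credential
# provider chain picks up temporary credentials from the instance's
# IAM role automatically.
dynamodb = boto3.client(
    "dynamodb",
    region_name="us-east-1",  # placeholder region
    use_ssl=True,             # TLS in transit (already the default)
)

# Example call; "Orders" and its key "OrderId" are hypothetical.
resp = dynamodb.get_item(
    TableName="Orders",
    Key={"OrderId": {"S": "1001"}},
)
print(resp.get("Item"))
```

Because the credentials are resolved at request time from the role, rotating or revoking them never requires touching the Linux host.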
Question : QuickTechie.com is a very popular website for certification exam preparation, providing online practice material. Only members of the website can attempt a practice paper, so you need to create a profile before you can start practicing. Session state, such as how many questions you have attempted and how many were right or wrong, is maintained per session; once you close the session or finish the question paper, the history of your exam attempt is deleted. Which of the following services can be used to maintain this entire history of an exam attempt? Remember it is a scalable website with 4 EC2 nodes across two Availability Zones.
A. Amazon S3
B. ElastiCache
C. DynamoDB
D. Amazon Simple Workflow Service
E. Amazon Redshift
1. A,B 2. B,C 3. C,D 4. D,E 5. A,E
Correct Answer : 2 Explanation: In this question you need to understand the exact requirement. The website is scalable and already uses 4 EC2 instances, but the given options are not related to scalability. It is about managing user sessions: until the user finishes the exam, session data must be stored somewhere. Even without AWS, you would manage session-specific data either in an in-memory cache or in a database.
However, on AWS you should prefer ElastiCache for caching session data, or, if you want to persist it and delete it later, DynamoDB. DynamoDB is a very fast NoSQL database when used properly.
S3: No; S3 is object storage for files, images and videos, so eliminate this option. Redshift: a data-warehouse solution, so it cannot be used here; eliminate this option as well.
SWF: The requirement has nothing to do with a workflow service, so eliminate this option too. Hence, the remaining options (B, C) are well suited.
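The behaviour described above can be sketched with a tiny in-memory model of what such a session store does (ElastiCache in practice, or a DynamoDB table with a TTL attribute): write per-question results under a session key, read them back during the exam, and drop the whole session when it ends or expires. All names here are illustrative, not part of any AWS API.

```python
import time

class SessionStore:
    """Conceptual stand-in for ElastiCache / a DynamoDB table with TTL."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._data = {}  # session_id -> (expires_at, attempt history)

    def record_answer(self, session_id, question, correct):
        # Create the session on first write, refresh TTL on every write.
        expires, history = self._data.get(
            session_id, (time.time() + self.ttl, []))
        history.append({"question": question, "correct": correct})
        self._data[session_id] = (time.time() + self.ttl, history)

    def history(self, session_id):
        entry = self._data.get(session_id)
        if entry is None or entry[0] < time.time():
            return None  # expired or finished: history is gone
        return entry[1]

    def end_session(self, session_id):
        # Finishing the paper deletes the attempt history, as required.
        self._data.pop(session_id, None)

store = SessionStore(ttl_seconds=3600)
store.record_answer("user-42", "Q1", True)
store.record_answer("user-42", "Q2", False)
print(len(store.history("user-42")))   # 2 answers recorded so far
store.end_session("user-42")
print(store.history("user-42"))        # None: history deleted with session
```

In DynamoDB the same effect comes from a table keyed by session ID with a TTL attribute; in ElastiCache, from a key with an expiry set at write time.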
Question : You are working at AcmeShell Inc. Their accounting department submits tax records monthly, for their employees as well as for the services the company provides to its clients, and they need to store all of these documents protected from deletion and any kind of data loss. Which of the following is the most suitable AWS solution?
1. You create an EBS volume, attach it to an EC2 instance, and install the accounting application on it, which encrypts the documents and stores them there. You also replicate the same EBS volume in another region using a sync process.
2. You create one copy on an EBS volume and another copy in the instance store of an EC2 instance.
3. You use the AWS Glacier storage service.
4. You use the S3 storage service with versioning enabled.
5. You use DynamoDB, which can store documents as well.
Correct Answer : 4 Explanation: What is the requirement in the question? Storing documents that must be protected from accidental deletion. All the options are storage services.
S3: Simple Storage Service is used to store objects such as documents, images and video files, so it could be a correct option. Is it durable? Yes: once you store an object it remains until you delete it explicitly, and retrieval is fast. Can a document be accidentally deleted from an S3 bucket? If the user has permission to do so, yes. But if you enable versioning, all versions of the document are kept by default, so this is the most suitable answer for the given requirement. Please note that the extra space taken by multiple versions of a document attracts charges: versioning itself is free, but the storage consumed by versioned objects is billed.
DynamoDB: Not at all good for document storage, so eliminate it. It is a NoSQL store for key-value data with fast retrieval.
Glacier: This is also object storage, but it is used for archival; data that is accessed very infrequently belongs there. You may get confused by this option, but to avoid accidental deletion, enabling versioning on the S3 bucket is the correct choice.
EBS: Elastic Block Store is storage, but you would generally not use it for this requirement. EBS is typically used to host databases, for example when you install your own MySQL server, and to access data stored on an EBS volume it must always be attached to an EC2 instance.
Instance store: Anything stored in the instance store is deleted as soon as the instance is terminated, so this can never be a correct choice for the given requirement.
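As a sketch of the winning option, enabling versioning on an existing bucket is a single API call with boto3. The bucket name `acme-tax-records` is a placeholder assumption, and the call requires live AWS credentials, so treat this as illustrative rather than runnable as-is.

```python
import boto3

s3 = boto3.client("s3")

# "acme-tax-records" is a hypothetical bucket name.
s3.put_bucket_versioning(
    Bucket="acme-tax-records",
    VersioningConfiguration={"Status": "Enabled"},
)

# From this point on, an overwrite or DELETE creates a new version or
# a delete marker instead of destroying the object; older versions
# remain listable via list_object_versions and can be restored.
```

Pairing versioning with an appropriate bucket policy (or MFA Delete) further hardens the bucket against accidental or malicious deletion.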
1. You provision an extra 300 GB at the start of every month.
2. Whatever storage you need, you must provision in advance, because once the storage is provisioned, changing its size requires migrating the data.
3. Aurora DB will take care of this, automatically increasing storage to meet the 10 GB-per-day need.
4. You can configure a feature in Aurora DB to get the desired storage increase every day, paying extra charges for this capability.