Question : You have a web application deployed on an EC2 instance, with its database on an EBS volume. Day by day your application is becoming more popular, so it needs more storage as well as higher IOPS. To handle this, you switched to a Provisioned IOPS EBS volume and also increased the overall volume size, but you still don't see a major improvement in performance. What further action will you take to improve the performance?
1. You will further increase the EBS volume size
2. You will further provision more IOPS
3. You will create a new EBS volume with the desired IOPS and size, and migrate everything from the existing EBS volume to the new one
4. You will change the EC2 instance type
Correct Answer : 4 Explanation: Remember that just changing the EBS volume or provisioning higher IOPS is not enough; you also need a corresponding EC2 instance type. There are various EBS-optimized EC2 instances, and you should always use an EBS-optimized instance with a Provisioned IOPS EBS volume to take advantage of the higher IOPS.
EBS-optimized instances have dedicated throughput between EC2 and Amazon EBS, ranging from 62.5 MB/s to 1,750 MB/s depending on which instance type you choose.
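As an illustration, here is a minimal boto3 sketch of the pairing this answer describes: a Provisioned IOPS (io1) volume alongside an EBS-optimized instance. The region, availability zone, AMI id, instance type, size, and IOPS values are placeholder assumptions, not recommendations.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Create a Provisioned IOPS SSD (io1) volume with the desired IOPS.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,        # GiB -- illustrative value
    VolumeType="io1",
    Iops=10000,      # provisioned IOPS -- illustrative value
)

# Launch an EBS-optimized instance so the volume's IOPS are not throttled
# by shared network bandwidth between the instance and EBS.
instances = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI id
    InstanceType="m5.xlarge",         # one of the EBS-optimized instance types
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,
)
```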
Question : Suppose you have provisioned an EBS volume with 500 IOPS. Which of the following statements are correct?
A. You can have up to 500 writes per second, each with a size of up to 256KB
B. You can have up to 250 writes per second, each with a size of up to 256KB
C. You can have up to 250 writes per second, each with a size of up to 512KB
D. You can have up to 500 writes per second, each with a size of up to 512KB
1. A,B
2. B,C
3. C,D
4. A,C
5. B,D
Correct Answer : 4 Explanation: Provisioned IOPS counts each read or write of up to 256KB as one I/O; a 300KB write, for example, counts as 2 I/Os. Hence, with 500 provisioned IOPS you can perform up to 500 writes of 256KB each per second, while at 512KB per write (2 I/Os each) you can perform only 250 writes per second.
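To make the arithmetic concrete, here is a small Python illustration of the 256KB I/O unit described above (the write sizes are examples only):

```python
import math

IO_UNIT_KB = 256  # Provisioned IOPS counts each read/write of up to 256KB as one I/O


def ios_consumed(write_size_kb: int) -> int:
    """Number of I/O operations a single write of the given size counts as."""
    return math.ceil(write_size_kb / IO_UNIT_KB)


print(ios_consumed(256))  # 1 -> 500 provisioned IOPS allows 500 such writes per second
print(ios_consumed(512))  # 2 -> the same 500 IOPS allows only 250 such writes per second
print(ios_consumed(300))  # 2 -> a 300KB write also counts as two I/Os
```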
Question : You are developing a web application hosted on an EC2 instance. The application allows users to take an online exam, and once they appear for the exam and clear it successfully, a certificate is issued. The application is designed so that a user logs in to take the exam, and the same application then creates a certificate (PDF) file and saves it in an Amazon S3 bucket so the user can download it from the web application. As soon as the user completes the exam, a message is published to SQS with the user id and the URL of the certificate file. The application on the EC2 instance reads this message, sends the user an email with all the details, and stores the data in DynamoDB, which is partitioned by a unique exam id per attempt. Until the user takes the exam, the entire session data is kept in ElastiCache. Now this web application has been subscribed to by a big organization with 250K employees, all of whom have to take the exam. Which of the AWS resources involved could cause a performance hit?
1. SQS Queue
2. DynamoDB
3. S3 bucket
4. EC2 instance
Correct Answer : 4 Explanation: The question gives a scenario integrating various AWS services and asks which one can cause a performance issue. From the given options, you need to pick the service that is not auto-scaled unless you configure it to be.
SQS: It supports elastic load without explicit configuration of how much load it can or cannot handle. DynamoDB: Again, you don't have to configure this component explicitly for scaling; AWS takes care of it. S3 bucket: Any amount of data is supported. Notice that all three of these services are natively managed, and AWS handles their scaling.
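For context, here is a minimal boto3 sketch of the message flow the scenario describes: publishing the exam-completion message to SQS, then consuming it and recording the attempt in DynamoDB. The queue URL, table name, key format, and the (omitted) email step are all hypothetical.

```python
import json

import boto3

sqs = boto3.client("sqs")
dynamodb = boto3.resource("dynamodb")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/certificates"  # placeholder
table = dynamodb.Table("ExamAttempts")  # hypothetical table, keyed by exam attempt id


def publish_completion(user_id: str, certificate_url: str) -> None:
    """Publish the message the scenario describes: user id plus certificate URL."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"user_id": user_id, "certificate_url": certificate_url}),
    )


def consume_batch() -> None:
    """Read a batch of messages (email step omitted) and record each attempt."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        table.put_item(Item={
            "exam_attempt_id": body["user_id"] + "#1",  # illustrative partition key
            "certificate_url": body["certificate_url"],
        })
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```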
EC2 service: You have to provision capacity yourself. As the scenario describes, you have only one instance; you should use an Auto Scaling group to scale the EC2 tier, as sketched below.
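A minimal boto3 sketch of putting the web tier behind an Auto Scaling group with a target-tracking policy; the group name, launch template, subnet ids, sizes, and CPU target are illustrative assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # region is an assumption

# Run the web tier in an Auto Scaling group so EC2 capacity grows with load.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="exam-portal-asg",               # hypothetical name
    LaunchTemplate={
        "LaunchTemplateName": "exam-portal-template",     # hypothetical launch template
        "Version": "$Latest",
    },
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # placeholder subnet ids
)

# Scale out/in automatically based on average CPU across the group.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="exam-portal-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,  # illustrative CPU target
    },
)
```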
Question : You have thousands of text files generated at random times by your on-premise applications. You want to do some processing on these files, and the code is already written as standalone Java applications that take the input path of the files, process them, and generate files at an output location. You have already provisioned AWS services for various activities, and you are planning to migrate this file processing to AWS: as soon as a file is created, you will publish it to an S3 bucket, and your existing Java application will process that file and generate the output in an S3 bucket. Which solution will you prefer for this requirement in AWS?
1. You will be using AWS EMR to process the files
2. You will provision 5 EC2 servers, so each can process 200 files at a time
3. You will be using AWS Lambda Service
4. You will be using AWS S3 Lifecycle configuration features
5. You will be using AWS Simple Workflow Service
Correct Answer : 3 Explanation: From the given requirement we can conclude that:
- There is existing code written in Java that should not be rewritten for an AWS service
- Files are generated at random times; their timing is not fixed
- We need to keep costs down as well
AWS Lambda: This is a service that can run your existing Java code, with no need to provision any AWS infrastructure in advance. You pay only for what you use; if your code is not running, there are no charges. You upload your code, and AWS Lambda takes care of the rest: if more resources are needed to process your job, Lambda handles the scaling, high availability, and so on. You can set up Lambda so that a new file is processed as soon as it is uploaded (see the sketch after this list). Use AWS Lambda when you want to trigger processing based on events such as:
- A change in an Amazon S3 bucket
- An update to a DynamoDB table
- Custom events generated by your mobile application
- Events from IoT devices that you want to process
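A minimal sketch of an S3-triggered Lambda handler, shown here in Python for brevity even though the scenario's existing code is Java (Lambda supports Java as well). The output bucket name and the process() helper are hypothetical stand-ins.

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")

OUTPUT_BUCKET = "processed-files-bucket"  # hypothetical output bucket


def handler(event, context):
    """Invoked by S3 whenever a new file lands in the watched bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Fetch the newly uploaded file and run the processing step on it.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        result = process(body)

        # Write the result to the output location.
        s3.put_object(Bucket=OUTPUT_BUCKET, Key="output/" + key, Body=result)


def process(data: bytes) -> bytes:
    # Stand-in for the existing processing logic.
    return data.upper()
```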
EC2 service: You could set up 5 EC2 instances and use your existing code to process the files, but this is not an efficient solution for the given requirement. You don't know when files will be generated and need to be processed, so keeping 5 EC2 instances running all the time would be costly, and scaling and maintenance would be your headache. Why do that when Lambda fits this requirement?
EMR: Elastic MapReduce is a big data solution on which you can run your own MapReduce jobs. Again, you would have to write entirely new code to process your files, and it is not at all a cost-efficient solution: for a small piece of work you would be provisioning a big elephant. Don't do that.
S3 bucket: S3 buckets are good only for storage; they are not capable of processing files.
Simple Workflow Service: Workflows are good when you want to build a multi-step process. Here there is no need for a multi-step process; we need a single-step process, and the code is already written in Java.