
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : Your company produces customer-commissioned, one-of-a-kind skiing helmets combining high fashion with custom technical enhancements. Customers can show off their
individuality on the ski slopes and have access to head-up displays, GPS rear-view cams, and any other technical innovation they wish to embed in the helmet.
The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are of the
highest standard. Assessments are a mixture of human and automated checks. You need to add a new set of assessments to model the failure modes of the custom electronics
using GPUs with CUDA across a cluster of servers with low-latency networking. What architecture would allow you to automate the existing process using a hybrid approach and ensure
that the architecture can support the evolution of processes over time?
1. Use AWS Data Pipeline to manage the movement of data and meta-data and the assessments. Use an auto-scaling group of G2 instances in a placement group.
2. Use Amazon Simple Workflow (SWF) to manage assessments and the movement of data and meta-data. Use an auto-scaling group of G2 instances in a placement group.
3. Use Amazon Simple Workflow (SWF) to manage assessments and the movement of data and meta-data. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O
Virtualization).
4. Use AWS Data Pipeline to manage the movement of data and meta-data and the assessments. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).

Answer: 2
Explanation: The first part of the question is about SWF versus Data Pipeline. Both can automate the movement of data, but the process also includes human assessments, so SWF is
the better choice. Amazon SWF enables applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics
pipelines, to be designed as a coordination of tasks. Tasks represent invocations of various processing steps in an application, which can be performed by executable code, web
service calls, human actions, and scripts. When automating workflows that include long-running human tasks (e.g. approvals, reviews, investigations), Amazon SWF reliably tracks
the status of processing steps that run for up to several days or months. While both services provide execution tracking, retry and exception-handling capabilities, and the
ability to run arbitrary actions, AWS Data Pipeline is specifically designed to facilitate the steps that are common across the majority of data-driven workflows - in particular,
executing activities after their input data meets specific readiness criteria, easily copying data between different data stores, and scheduling chained transforms. This highly
specific focus means that its workflow definitions can be created very rapidly and with no code or programming knowledge. Data Pipeline is a service used to transfer data between
various AWS services; for example, you can use Data Pipeline to read the log files from your EC2 instances and periodically move them to S3.

The Simple Workflow service is very powerful; you can even implement your own workflow logic with it. For example, most e-commerce systems have scalability problems in their order
systems, and you could write the ordering workflow itself in SWF. Hence options 1 and 4 are out.
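
As a rough illustration of how such an assessment process could be coordinated with SWF (a sketch only, not the exam's reference implementation), the Python/boto3 snippet below
registers a hypothetical workflow and activity type and starts one execution per helmet. The domain, names, task list, and timeouts are all made-up placeholders.

import boto3

swf = boto3.client("swf", region_name="us-east-1")

# Assumed to exist already: an SWF domain named "HelmetManufacturing".
# Re-running the register_* calls raises TypeAlreadyExistsFault, which real
# code would catch and ignore.
swf.register_workflow_type(
    domain="HelmetManufacturing",
    name="HelmetAssessmentWorkflow",
    version="1.0",
    defaultTaskList={"name": "assessment-tasks"},
    defaultChildPolicy="TERMINATE",
    defaultTaskStartToCloseTimeout="3600",
    defaultExecutionStartToCloseTimeout="86400",
)
swf.register_activity_type(
    domain="HelmetManufacturing",
    name="FailureModeAssessment",   # the GPU/CUDA job run on the GPU cluster
    version="1.0",
    defaultTaskList={"name": "assessment-tasks"},
)

# One workflow execution per helmet; decider and activity workers (automated
# jobs as well as human-assessment steps) then poll SWF for tasks.
swf.start_workflow_execution(
    domain="HelmetManufacturing",
    workflowId="helmet-12345",
    workflowType={"name": "HelmetAssessmentWorkflow", "version": "1.0"},
    taskList={"name": "assessment-tasks"},
    input='{"helmetId": "12345"}',
)

Because the workflow logic lives in decider code rather than in a fixed pipeline definition, new assessment steps can be added as the process evolves, which is the second
requirement of the question.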

Instance Type G2: This family includes G2 instances intended for graphics and general-purpose GPU compute applications.
Features: High-frequency Intel Xeon E5-2670 (Sandy Bridge) processors.
High-performance NVIDIA GPUs, each with 1,536 CUDA cores and 4 GB of video memory.
Each GPU features an on-board hardware video encoder designed to support up to eight real-time HD video streams (720p at 30 fps) or up to four real-time full HD video streams
(1080p at 30 fps).
Support for low-latency frame capture and encoding for either the full operating system or select render targets, enabling high-quality interactive streaming experiences.
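
For the compute side of the chosen answer, here is a minimal, hedged boto3 sketch of placing G2 instances into a cluster placement group for low-latency networking. The AMI ID and
key pair name are placeholders; in the actual architecture the instances would come from an Auto Scaling group configured with the same placement group rather than a one-off
run_instances call.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement groups keep instances on low-latency, high-bandwidth
# networking within a single Availability Zone.
ec2.create_placement_group(GroupName="gpu-assessment-pg", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder: a CUDA-enabled AMI
    InstanceType="g2.2xlarge",
    MinCount=4,
    MaxCount=4,
    KeyName="my-key",            # placeholder key pair
    Placement={"GroupName": "gpu-assessment-pg"},
)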





Question : You are developing a new mobile application and are considering storing user preferences in AWS. This would provide a more uniform cross-device experience to users
who use multiple mobile devices to access the application. The preference data for each user is estimated to be 50 KB in size. Additionally, 5 million customers are expected to use
the application on a regular basis. The solution needs to be cost-effective, highly available, scalable, and secure. How would you design a solution to meet the above requirements?
1. Set up an RDS MySQL instance in 2 Availability Zones to store the user preference data. Deploy a public-facing application on a server in front of the database to
manage security and access credentials.
2. Set up a DynamoDB table with an item for each user holding the necessary attributes for the user preferences. The mobile application will query the user
preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine-Grained Access Control to authenticate and authorize access.
3. Set up an RDS MySQL instance with multiple read replicas in 2 Availability Zones to store the user preference data and access the user
preferences from the read replicas. Leverage the MySQL user management and access privilege system to manage security and access credentials.
4. Store the user preference data in S3. Set up a DynamoDB table with an item for each user and an item attribute pointing to the user's S3 object. The mobile
application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.

Answer: 2
Explanation: Amazon S3 is optimal for storing numerous classes of information that are relatively static and benefit from its durability, availability, and elasticity features.
Because user preference data can change rapidly, S3 is not a good choice; rapidly changing data is one of the S3 anti-patterns, so option 4 is not correct. The other options involve
RDS MySQL and DynamoDB. Data that must be updated very frequently might be better served by a storage solution with lower read/write latencies, such as Amazon EBS volumes,
Amazon RDS or other relational databases, or Amazon DynamoDB.


Automated scalability : Amazon RDS provides pushbutton scaling. If you need fully automated scaling, Amazon DynamoDB may be a better choice.
Amazon DynamoDB is a fast, fully-managed NoSQL database service that makes it simple and cost-effective to store and retrieve any amount of data, and serve any level of request
traffic. Amazon DynamoDB stores structured data in tables, indexed by primary key, and allows low-latency read and write
access to items ranging from 1 byte up to 64 KB.

Ideal Usage Patterns
Amazon DynamoDB is ideal for existing or new applications that need a flexible NoSQL database with low read and write latencies, and the ability to scale storage and throughput up or
down as needed without code changes or downtime. Common use cases include: mobile apps, gaming, digital ad serving, live voting and audience interaction for live events, sensor
networks, log ingestion, access control for web-based content, metadata storage for Amazon S3 objects, e-commerce shopping carts, and web session management. Many of these use cases
require a highly available and scalable database because downtime or performance degradation has an immediate negative impact on an organization's business.

Hence the best-suited option is 2.
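
To illustrate the fine-grained access control mentioned in option 2, the sketch below attaches an assumed policy to the IAM role that mobile clients obtain via STS web identity
federation (AssumeRoleWithWebIdentity). The role name, table name, and account ID are placeholders; the condition limits each Login with Amazon user to items whose partition key
equals their own provider user ID.

import json
import boto3

iam = boto3.client("iam")

# Placeholder policy: allows a federated user to read and write only the
# DynamoDB items whose partition (leading) key is their Login with Amazon
# user ID, which is how fine-grained access control scopes per-user data.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                   "dynamodb:UpdateItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/UserPreferences",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
            }
        }
    }]
}

iam.put_role_policy(
    RoleName="MobileAppUserRole",    # role assumed via web identity federation
    PolicyName="UserPreferencesFGAC",
    PolicyDocument=json.dumps(policy),
)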







Question : A company is building a voting system for a popular TV show. Viewers will watch the performances and then visit the show's website to vote for their favorite performer.
It is expected that in a short period of time after the show has finished, the site will receive millions of visitors. The visitors will first log in to the site using their
Amazon.com credentials and then submit their vote. After the voting is completed, the page will display the vote totals. The company needs to build the site such that it can handle
the rapid influx of traffic while maintaining good performance, but also wants to keep costs to a minimum. Which of the design patterns below should they use?

1. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to
authenticate the user, then process the user's vote and store the result in a Multi-AZ Relational Database Service instance.
2. Use CloudFront and the static website hosting feature of S3 with the JavaScript SDK to call the Login with Amazon service to authenticate the user; use IAM Roles
to gain permissions to a DynamoDB table to store the user's vote.
3. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to
authenticate the user, then process the user's vote and store the result in a DynamoDB table, using IAM Roles for EC2 instances to gain permissions to the DynamoDB
table.
4. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to
authenticate the user, then process the user's vote and store the result in an SQS queue, using IAM Roles for EC2 instances to gain permissions to the SQS queue. A
set of application servers will then retrieve the items from the queue and store the result in a DynamoDB table.

Answer: 4
Explanation: In short, if you have mainly lookup queries (and not join queries), DynamoDB (and other NoSQL databases) is better. If you need to handle a lot of data, you will be
limited when using MySQL (and other RDBMSs). Queues are used to decouple message producers from message consumers. This is one way to architect for scale and reliability.

Let's say you've built a mobile voting app for a popular TV show and 5 to 25 million viewers are all voting at the same time (at the end of each performance). How are you going to
handle that many votes in such a short space of time (say, 15 seconds)? You could build a significant web server tier and database back-end that could handle millions of messages
per second, but that would be expensive, you would have to pre-provision for the maximum expected workload, and it would not be resilient (for example, to database failure or
throttling). If few people voted, you would be overpaying for infrastructure; if voting went crazy, votes could be lost.

A better solution uses a queuing mechanism that decouples the voting apps from your service, where the vote queue is highly scalable so it can happily absorb 10
messages/sec or 10 million messages/sec. You then have an application tier pulling messages from that queue as fast as possible to tally the votes.
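
As a rough sketch of this decoupled pattern (names are hypothetical, and the queue and table are assumed to already exist), the snippet below shows the web tier doing nothing more
than enqueuing a vote, while a separate worker tier drains the queue and tallies results into a DynamoDB counter, so the database never sees the spike directly.

import boto3

sqs = boto3.resource("sqs", region_name="us-east-1")
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

queue = sqs.get_queue_by_name(QueueName="vote-queue")   # assumed to exist
table = dynamodb.Table("VoteTotals")                    # assumed to exist

def submit_vote(performer_id):
    """Web tier: a cheap, fast enqueue of a single vote."""
    queue.send_message(MessageBody=performer_id)

def tally_votes():
    """Worker tier: drain the queue and increment per-performer counters."""
    while True:
        messages = queue.receive_messages(MaxNumberOfMessages=10,
                                          WaitTimeSeconds=20)
        if not messages:
            break
        for msg in messages:
            table.update_item(
                Key={"PerformerId": msg.body},
                UpdateExpression="ADD VoteCount :one",
                ExpressionAttributeValues={":one": 1},
            )
            msg.delete()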




Related Questions


Question : QuickTechie Inc's AWS consultant has been asked to design the storage layer for an application. The application requires disk performance of at least 100,000 IOPS; in
addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a
capacity of at least 3 TB. Which of the following designs will meet these objectives?
1. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800 GB SSD ephemeral disks provided with the instance. Provision 3x1 TB EBS
volumes, attach them to the instance, and configure them as a second RAID 0 volume. Configure synchronous, block-level replication from the ephemeral-backed volume to the
EBS-backed volume.
2. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800 GB SSD ephemeral disks provided with the instance. Configure synchronous,
block-level replication to an identically configured instance in us-east-1b.
3. Instantiate a c3.8xlarge instance in us-east-1. Provision an AWS Storage Gateway and configure it for 3 TB of storage and 100,000 IOPS. Attach the volume to the
instance.
4. Instantiate a c3.8xlarge instance in us-east-1. Provision 4x1 TB EBS volumes, attach them to the instance, and configure them as a single RAID 5 volume. Ensure that
EBS snapshots are performed every 15 minutes.
5. Instantiate a c3.8xlarge instance in us-east-1. Provision 3x1 TB EBS volumes, attach them to the instance, and configure them as a single RAID 0 volume. Ensure that EBS
snapshots are performed every 15 minutes.


Question : QuickTechie Inc requires the ability to analyze a large amount of data, which is stored on Amazon S3, using Amazon Elastic MapReduce. You are using the cc2.8xlarge
instance type, whose CPUs are mostly idle during processing. Which of the below would be the most cost-efficient way to reduce the runtime of the job?

1. Create more, smaller files on Amazon S3.
2. Add additional cc2.8xlarge instances by introducing a task group.
3. Use smaller instances that have higher aggregate I/O performance.
4. Create fewer, larger files on Amazon S3.



Question : Acmeshell Inc is running a successful multi-tier web application on AWS, and your marketing department has asked you to add a reporting tier to the application. The
reporting tier will aggregate and publish status reports every 30 minutes from user-generated information that is being stored in your web application's database. You are currently
running a Multi-AZ RDS MySQL instance for the database tier. You have also implemented ElastiCache as a database caching layer between the application tier and the database tier.
Please select the answer that will allow you to successfully implement the reporting tier with as little impact as possible on your database.


1. Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte range requests.
2. Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ.
3. Launch an RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica.
4. Generate the reports by querying the ElastiCache database caching tier.


Question : Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to frequently
process this data, and used RabbitMQ, an open-source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite. Your
manager told you to stay with the current design and leverage AWS archival storage and messaging services to minimize cost. Which is correct?
1. Use SQS for passing job messages; use CloudWatch alarms to terminate EC2 worker instances when they become idle. Once data is processed, change the storage class of
the S3 objects to Reduced Redundancy Storage.
2. Set up Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3
objects to Reduced Redundancy Storage.
3. Set up Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3
objects to Glacier.
4. Use SNS to pass job messages; use CloudWatch alarms to terminate spot worker instances when they become idle. Once data is processed, change the storage class of the
S3 objects to Glacier.


Question : A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPsec VPN.
The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace
specific to that user. Which two approaches can satisfy these objectives? (Choose 2 answers)

A. Develop an identity broker that authenticates against the IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application
calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket.
B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to
assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.
C. Develop an identity broker that authenticates against LDAP and then calls the IAM Security Token Service to get IAM federated user credentials. The application calls the
identity broker to get IAM federated user credentials with access to the appropriate S3 bucket.
D. The application authenticates against LDAP. The application then calls the AWS Identity and Access Management (IAM) security service to log in to IAM using the LDAP
credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.
E. The application authenticates against the IAM Security Token Service using the LDAP credentials. The application uses those temporary AWS security credentials to access the
appropriate S3 bucket.


1. A,B
2. B,C
3. Access Mostly Uused Products by 50000+ Subscribers
4. D,E
5. A,E



Question : An organization is measuring the latency of an application every minute and storing data inside a file in the JSON format. The organization wants
to send all latency data to AWS CloudWatch. How can the organization achieve this?
1. The user has to parse the file before uploading data to CloudWatch
2. It is not possible to upload the custom data to CloudWatch
3. The user can supply the file as an input to the CloudWatch command
4. The user can use the CloudWatch Import command to import data from the file to CloudWatch