
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : Which of the following tenancy attributes of a VPC makes all instances launched in the VPC run as single-tenancy instances?

1. default
2. dedicated
3.
4. None


Correct Answer : 2 (dedicated)
Exp: Dedicated Instances are physically isolated at the host hardware level from instances that aren't dedicated and from instances that belong to other AWS accounts. When you create
a VPC, its tenancy attribute is set to default. In such a VPC, you can launch instances with a tenancy value of dedicated so that they run as single-tenancy instances;
otherwise, by default, they run as shared-tenancy instances. If you set the tenancy attribute of a VPC to dedicated, all instances launched in the VPC run as single-tenancy
instances. For more information, see Dedicated Instances in the Amazon VPC User Guide. For pricing information, see the Amazon EC2 Dedicated Instances product page.

When you create a launch configuration, the default value for the instance placement tenancy is null and the instance tenancy is controlled by the tenancy attribute of the VPC. The
following table summarizes the instance placement tenancy of the Auto Scaling instances launched in a VPC.

Launch Configuration Tenancy    VPC Tenancy = default       VPC Tenancy = dedicated
not specified                   shared-tenancy instance     Dedicated Instance
default                         shared-tenancy instance     Dedicated Instance
dedicated                       Dedicated Instance          Dedicated Instance
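
As a rough illustration of how these settings interact, the sketch below uses boto3 to create a VPC whose tenancy attribute is dedicated, and a launch configuration that leaves the placement tenancy unset so that instances inherit the VPC tenancy. The CIDR block, AMI ID and names are placeholder assumptions, not values taken from the question.

import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# A VPC with tenancy 'dedicated': every instance launched into it runs as a
# single-tenancy (Dedicated) instance, regardless of the instance-level setting.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16", InstanceTenancy="dedicated")
vpc_id = vpc["Vpc"]["VpcId"]

# A launch configuration with no PlacementTenancy inherits the VPC tenancy, so
# Auto Scaling instances in this VPC still come up as Dedicated Instances.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="example-dedicated-lc",  # placeholder name
    ImageId="ami-0123456789abcdef0",                 # placeholder AMI ID
    InstanceType="m5.large",
    # PlacementTenancy="dedicated",  # only needed when the VPC tenancy is 'default'
)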




Question : One of the AWS account owners faced a major challenge in June as his account was hacked and the hacker deleted
all the data from his AWS account. This resulted in a major blow to the business. Which of the below mentioned
steps may not help in preventing this action?
1. Take a backup of the critical data to an offsite / on-premise location.
2. Create an AMI and a snapshot of the data at regular intervals, and keep a copy in separate regions.
3.
4. Do not share the AWS access and secret access keys with others, and do not store them inside programs; instead, use IAM roles.



Correct Answer : 2
Exp: AWS security follows the shared security model, where the user is as responsible as Amazon. If the user wants secure access to AWS while hosting applications on EC2,
the first security rule to follow is to enable MFA for all users; this adds an extra security layer. Second, the user should never give his access or secret access
keys to anyone, nor store them inside programs; the better solution is to use IAM roles. For the organization's critical data, the user should keep an offsite / on-premise backup,
which helps recover critical data in case of a security breach.
It is recommended to keep AWS AMIs and snapshots, and to copy them to other regions, so that they help in a DR scenario. However, in case of a data security breach of the
account they may not be very helpful, because the hacker can delete them as well.
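
To make the backup recommendation concrete, the hypothetical boto3 sketch below copies an existing AMI and an EBS snapshot to a second region; because it relies on the IAM role attached to the instance (or the local credentials file), no access or secret keys appear in the code. The region names and resource IDs are placeholder assumptions.

import boto3

# boto3 picks up credentials from the instance's IAM role; no keys are embedded.
source_region = "us-east-1"   # placeholder source region
dr_region = "eu-west-1"       # placeholder DR region
ec2_dr = boto3.client("ec2", region_name=dr_region)

# Copy an AMI of the critical servers to the DR region.
ec2_dr.copy_image(
    Name="dr-copy-of-app-server",              # placeholder name
    SourceImageId="ami-0123456789abcdef0",     # placeholder AMI ID
    SourceRegion=source_region,
)

# Copy an EBS snapshot of the data volume to the DR region as well.
ec2_dr.copy_snapshot(
    SourceSnapshotId="snap-0123456789abcdef0", # placeholder snapshot ID
    SourceRegion=source_region,
    Description="Weekly DR copy of the data volume",
)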






Question : QuickTechie.com is hosting a scalable "Polling of the new News" web application using AWS. The organization has configured ELB
and Auto Scaling to make the application scalable. Which of the below mentioned statements is not required for ELB when the organization
plans to host the web application in a VPC?
1. Configure the security group rules and network ACLs to allow traffic to be routed between the subnets in the VPC.
2. The internet facing ELB should have a route table associated with the internet gateway.
3.
4. The ELB and all the instances should be in the same subnet.



Correct Answer : 4
Exp: To create and use ELB load balancers within a VPC, you first have to configure your VPC environment by creating a VPC, creating one or more subnets, and then
launching your instances in the subnets. Here are some tips on configuring your VPC and subnets for Elastic Load Balancing.

Create your VPC with an Internet gateway in the region where you want to launch your instances and load balancer.

If you are a new customer, or if you are using a region you have not previously used, you are likely to get a default VPC. You can either use the default VPC or create your own.

Create subnets in each Availability Zone in which you want to launch your instances. Depending on your use case and your security and operational requirements, the subnets where you launch your instances can be either private or public.

Instances launched in a private subnet cannot communicate with the Internet. If you want your instances in a private subnet to have outbound Internet access only, place a network address translation (NAT) instance in a public subnet. A NAT instance enables instances in the private subnet to initiate outbound traffic to the Internet, but prevents them from receiving inbound traffic.

You can optionally create a separate subnet for your load balancer; your instances do not need to be in the same subnet as your load balancer. If you plan to place your load balancer and your back-end instances in separate subnets, make sure to configure the security group rules and network ACLs to allow traffic to be routed between the subnets in your VPC. If your rules are not configured correctly, instances in other subnets may not be reachable by the load balancer.

To ensure that your load balancer can scale properly, make sure that the subnet in which you plan to place it has a CIDR block of at least a /27 bitmask (e.g., 10.0.0.0/27) and at least 8 free IP addresses. When you create your load balancer and place it in a subnet, this defines the subnet that traffic must enter to forward requests to registered instances.

Important: If you are creating an Internet-facing load balancer, make sure to place it in a public subnet. After you create the public subnet, associate its route table with the Internet gateway to enable the load balancer in the subnet to connect with the Internet.

If you are planning to register linked EC2-Classic instances with your load balancer, make sure to enable your VPC for ClassicLink after you create it, and then create your load balancer in that VPC.

The most common VPC scenarios are documented in Scenarios for Amazon VPC. Each of these scenarios has a link to a detailed explanation, and at the end of each is a section called
Implementing the Scenario that gives you instructions on how to create a VPC for that scenario. You can follow the instructions from the scenario that best suits your use case
to create your VPC environment. Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of
the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment. Within this virtual private cloud, the user can launch AWS resources, such
as an ELB and EC2 instances. Two kinds of ELB are available with VPC: internet-facing and internal (private). An internet-facing ELB must be placed in a public subnet. After the
user creates the public subnet, he should associate its route table with the internet gateway to enable the load balancer in the subnet to connect with the internet. The ELB and
instances can be in separate subnets; however, to allow communication between the instances and the ELB the user must configure the security group rules and network ACLs to allow
traffic to be routed between the subnets in his VPC.
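
The hypothetical boto3 sketch below walks through the same steps for an internet-facing ELB: create a VPC and a public subnet, attach an internet gateway, associate a route table that sends 0.0.0.0/0 to the gateway, and then create the load balancer in that subnet. All CIDR blocks, names, the region and the availability zone are placeholder assumptions.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
elb = boto3.client("elb", region_name="us-east-1")

# VPC and a public subnet (a /24 leaves well over the /27 minimum the ELB needs to scale).
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.0.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

# Internet gateway attached to the VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Route table for the public subnet with a default route to the internet gateway.
rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)

# Classic load balancer placed in the public subnet (the default scheme is
# internet-facing). Back-end instances can live in a different, private subnet
# as long as security groups and network ACLs allow traffic between the subnets.
elb.create_load_balancer(
    LoadBalancerName="example-web-elb",  # placeholder name
    Listeners=[{
        "Protocol": "HTTP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP",
        "InstancePort": 80,
    }],
    Subnets=[subnet_id],
)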




Related Questions


Question : A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server currently shares
read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The
database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system
directory is backed up weekly to off-site tapes.

Which AWS storage and database architecture meets the requirements of the application?

1. Web servers: store read-only data in S3 and copy it from S3 to the root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast.
Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs; database backed up via DB snapshots.
2. Web servers: store read-only data in S3 and copy it from S3 to the root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast.
Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web servers, app servers and database backed up weekly to Glacier using snapshots.
3.
Database: use RDS with Multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs; database backed up via DB snapshots.
4. Web servers: store read-only data on an EC2 NFS server and mount it to each web server at boot time. App servers: share state using a combination of DynamoDB and IP
multicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs; database backed up via DB snapshots.



Question : Your customer wishes to deploy an enterprise application to AWS which will consist of several web servers, several application servers and a small (GB) Oracle
database. Information is stored both in the database and in the file systems of the various servers. The backup system must support database recovery, whole server and whole disk
restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database. Which backup architecture will meet these
requirements?
1. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs and supplement with file-level backups to S3 using traditional enterprise backup
software to provide file-level restore.
2. Backup RDS using a Multi-AZ deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file-level restore.
3.
traditional enterprise backup software to provide file-level restore.
4. Backup the RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.


Question : You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of
around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database,
and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a
PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan
requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year
improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?

1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
2. Ingest data into a DynamoDB table and move old data to a Redshift cluster
3.
4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS


Question : Your company produces customer-commissioned, one-of-a-kind skiing helmets combining high fashion with custom technical enhancements. Customers can show off their
individuality on the ski slopes and have access to head-up displays, GPS, rear-view cams and any other technical innovation they wish to embed in the helmet.
The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the
highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics
using GPUs with CUDA across a cluster of servers with low-latency networking. What architecture would allow you to automate the existing process using a hybrid approach and ensure
that the architecture can support the evolution of processes over time?
1. Use AWS Data Pipeline to manage movement of data, meta-data and assessments. Use an auto-scaling group of G2 instances in a placement group.
2. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data and meta-data. Use an auto-scaling group of G2 instances in a placement group.
3.
Virtualization).
4. Use AWS Data Pipeline to manage movement of data, meta-data and assessments. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).


Question : You are developing a new mobile application and are considering storing user preferences in AWS. This would provide a more uniform cross-device experience to users
who use multiple mobile devices to access the application. The preference data for each user is estimated to be 50KB in size. Additionally, 5 million customers are expected to use the
application on a regular basis. The solution needs to be cost-effective, highly available, scalable and secure. How would you design a solution to meet the above requirements?
1. Setup an RDS MySQL instance in 2 availability zones to store the user preference data. Deploy a public-facing application on a server in front of the database to
manage security and access credentials.
2. Setup a DynamoDB table with an item for each user holding the necessary attributes for the user preferences. The mobile application will query the user
preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine-Grained Access Control to authenticate and authorize access.
3.
preferences from the read replicas. Leverage the MySQL user management and access privilege system to manage security and access credentials.
4. Store the user preference data in S3. Setup a DynamoDB table with an item for each user and an item attribute pointing to the user's S3 object. The mobile
application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.


Question : A company is building a voting system for a popular TV show; viewers will watch the performances and then visit the show's website to vote for their favorite performer. It
is expected that in a short period of time after the show has finished the site will receive millions of visitors. The visitors will first log in to the site using their Amazon.com
credentials and then submit their vote. After the voting is completed the page will display the vote totals. The company needs to build the site such that it can handle the rapid
influx of traffic while maintaining good performance, but also wants to keep costs to a minimum. Which of the design patterns below should they use?

1. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to
authenticate the user, then process the user's vote and store the result in a Multi-AZ Relational Database Service instance.
2. Use CloudFront and the static website hosting feature of S3 with the JavaScript SDK to call the Login with Amazon service to authenticate the user, and use IAM roles
to gain permissions to a DynamoDB table to store the user's vote.
3.
authenticate the user, the web servers will process the user's vote and store the result in a DynamoDB table using IAM roles for EC2 instances to gain permissions to the DynamoDB
table.
4. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to
authenticate the user, then process the user's vote and store the result in an SQS queue using IAM roles for EC2 instances to gain permissions to the SQS queue. A
set of application servers will then retrieve the items from the queue and store the result in a DynamoDB table.