
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : QuickTechie.com hosts all of its applications in an AWS VPC. Its security partner is at a remote location and wants access to AWS to view
all the VPC records. How can the organization meet the partner's expectations without compromising the security of its AWS infrastructure?
1. The organization should not accept the request, because sharing credentials means compromising on security.
2. Create an IAM user who has read-only access to the AWS VPC and share those credentials with the auditor.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Create an IAM role that has read-only access to all EC2 services, including VPC, and assign that role to the auditor.




Correct Answer :
Explanation: Your security credentials identify you to services in AWS and grant you unlimited use of your AWS resources, such as your Amazon VPC resources. You can use
AWS Identity and Access Management (IAM) to allow other users, services, and applications to use your Amazon VPC resources without sharing your security credentials. You can choose
to allow full use or limited use of your resources by granting users permission to use specific Amazon EC2 API actions. Some API actions support resource-level permissions, which
allow you to control the specific resources that users can create or modify. The following policy grants users permission to create and manage your VPC. You might attach this policy
to a group of network administrators. The Action element specifies the API actions related to VPCs, subnets, Internet gateways, customer gateways, virtual private gateways, VPN
connections, route tables, Elastic IP addresses, security groups, network ACLs, and DHCP options sets. The policy also allows the group to run, stop, start, and terminate instances.
It also allows the group to list Amazon EC2 resources.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:*Vpc*",
      "ec2:*Subnet*",
      "ec2:*Gateway*",
      "ec2:*Vpn*",
      "ec2:*Route*",
      "ec2:*Address*",
      "ec2:*SecurityGroup*",
      "ec2:*NetworkAcl*",
      "ec2:*DhcpOptions*",
      "ec2:RunInstances",
      "ec2:StopInstances",
      "ec2:StartInstances",
      "ec2:TerminateInstances",
      "ec2:Describe*"
    ],
    "Resource": "*"
  }]
}

The policy uses wildcards to specify all actions for each type of object (for example, *SecurityGroup*). Alternatively, you could list each action explicitly. If you use the
wildcards, be aware that if AWS adds new actions whose names include any of the wildcarded strings in the policy, the policy automatically grants the group access to those new
actions.
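
To match the read-only access the question asks about, a minimal boto3 sketch along these lines would grant only ec2:Describe*; the user and policy names here are made up for illustration and are not part of the question.

import json
import boto3

# Read-only policy: ec2:Describe* covers viewing VPCs, subnets, route tables,
# gateways, security groups and network ACLs, but allows no modifications.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:Describe*"],
        "Resource": "*"
    }]
}

iam = boto3.client("iam")

policy = iam.create_policy(
    PolicyName="VpcReadOnlyForAuditor",           # hypothetical policy name
    PolicyDocument=json.dumps(read_only_policy),
)
iam.create_user(UserName="vpc-auditor")           # hypothetical user name
iam.attach_user_policy(
    UserName="vpc-auditor",
    PolicyArn=policy["Policy"]["Arn"],
)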




Question : Your corporate headquarters in New York can have an AWS Direct Connect connection to the VPC and your branch offices can use VPN connections to the
VPC. The branch offices in Los Angeles and Miami can send and receive data with each other and with your corporate headquarters, all using the AWS________
1. VPN
2. VPN CloudHub
3. Access Mostly Uused Products by 50000+ Subscribers
4. VNC




Correct Answer :

Explanation: If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model
that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing Internet connections who'd like to implement a convenient,
potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices. To use the AWS VPN CloudHub, you must create a virtual private gateway with
multiple customer gateways, each with unique Border Gateway Protocol (BGP) Autonomous System Numbers (ASNs). Customer gateways advertise the appropriate routes (BGP prefixes) over
their VPN connections. These routing advertisements are received and re-advertised to each BGP peer, enabling each site to send data to and receive data from the other sites. The
routes for each spoke must have unique ASNs, and the sites must not have overlapping IP ranges. Each site can also send and receive data from the VPC as if it were using a standard
VPN connection.

Sites that use AWS Direct Connect connections to the virtual private gateway can also be part of the AWS VPN CloudHub. For example, your corporate headquarters in New York can have
an AWS Direct Connect connection to the VPC and your branch offices can use VPN connections to the VPC. The branch offices in Los Angeles and Miami can send and receive data with
each other and with your corporate headquarters, all using the AWS VPN CloudHub.

To configure the AWS VPN CloudHub, you use the AWS Management Console to create multiple customer gateways, each with the unique public IP address of the gateway and a unique ASN.
Next, you create a VPN connection from each customer gateway to a common virtual private gateway. Each VPN connection must advertise its specific BGP routes. This is done using the
network statements in the VPN configuration files for the VPN connection. The network statements differ slightly depending on the type of router you use.

When using an AWS VPN CloudHub, you pay typical Amazon VPC VPN connection rates. You are billed the connection rate for each hour that each VPN is connected to the virtual private
gateway. When you send data from one site to another using the AWS VPN CloudHub, there is no cost to send data from your site to the virtual private gateway. You only pay standard
AWS data transfer rates for data that is relayed from the virtual private gateway to your endpoint. For example, if you have a site in Los Angeles and a second site in New York and
both sites have a VPN connection to the virtual private gateway, you pay $0.05 per hour for each VPN connection (for a total of $0.10 per hour). You also pay the standard AWS data
transfer rates for all data that you send from Los Angeles to New York (and vice versa) that traverses each VPN connection; network traffic sent over the VPN connection to the
virtual private gateway is free but network traffic sent over the VPN connection from the virtual private gateway to the endpoint is billed at the standard AWS data transfer rate.
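
As a rough illustration of that pricing model (the per-GB data transfer rate below is an assumed figure, not one quoted in the text):

# Two sites (Los Angeles and New York), each with one VPN connection to the hub.
vpn_connections = 2
vpn_hourly_rate = 0.05           # USD per VPN connection-hour, from the example above
hours = 24
connection_cost = vpn_connections * vpn_hourly_rate * hours   # $2.40 per day

# Traffic into the virtual private gateway is free; traffic relayed out to the
# other site is billed at the standard AWS data transfer rate (assumed here).
gb_relayed = 50
data_transfer_rate = 0.09        # USD per GB, illustrative only
transfer_cost = gb_relayed * data_transfer_rate

print(f"Connection hours: ${connection_cost:.2f}, relayed data: ${transfer_cost:.2f}")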



Question : Your company hosts an on-premises legacy engineering application with 900 GB of data shared via a central file server. The engineering data consists of
thousands of individual files ranging in size from megabytes to multiple gigabytes. Engineers typically modify 5-10 percent of the files each day. Your CTO would like
to migrate this application to AWS, but only if the application can be migrated over the weekend to minimize user downtime. You calculate that it will take a
minimum of 48 hours to transfer 900 GB of data using your company's existing 45-Mbps Internet connection.
After replicating the application's environment in AWS, which option will allow you to move the application's data to AWS without losing any data and within the given timeframe?




1. Copy the data to Amazon S3 using multiple threads and multi-part upload for large files over the weekend, and work in parallel with your developers to reconfigure the
replicated application environment to leverage Amazon S3 to serve the engineering files.
2. Sync the application data to Amazon S3 starting a week before the migration, on Friday morning perform a final sync, and copy the entire data set to your AWS file
server after the sync completes.
3. Access Mostly Uused Products by 50000+ Subscribers
EBS volume, mount the resulting EBS volume to your AWS file server on Sunday.
4. Leverage the AWS Storage Gateway to create a Gateway-Stored volume. On Friday copy the application data to the Storage Gateway volume. After the data has been copied,
perform a snapshot of the volume and restore the volume as an EBS volume to be attached to your AWS file server on Sunday.



Correct Answer :
Explanation: The situation is network-bound. Option A does not help: no matter how many threads are created, throughput is still limited by the network. Option B is the better choice because only about
5 percent of the files are modified each day; 0.95^7 is roughly 70 percent, so most of the files stay the same and can be copied to S3 before the migration, and the final sync only has to compare checksums and copy the files that changed.
The same concept is used by Dropbox: you can drag 1 TB into Dropbox a week ahead of time. Option C is unattractive because sending sensitive data on a USB drive and trusting the postal service is not a good idea.
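
The arithmetic behind that reasoning, using only the figures given in the question (900 GB, a 45-Mbps link, roughly 5 percent of files changing per day), works out as follows:

data_gb = 900
link_mbps = 45

# A single bulk copy of the whole data set saturates the link for ~44 hours.
full_copy_hours = (data_gb * 8 * 1000) / link_mbps / 3600

# If ~5% of files change each day, about 0.95**7 ~= 70% of the data is still
# identical after a week of pre-syncing, so the final Friday sync only has to
# move the remaining ~30%.
unchanged_fraction = 0.95 ** 7
final_sync_gb = data_gb * (1 - unchanged_fraction)
final_sync_hours = (final_sync_gb * 8 * 1000) / link_mbps / 3600

print(f"Full copy: ~{full_copy_hours:.0f} h")        # ~44 h
print(f"Final sync: ~{final_sync_hours:.0f} h")      # ~13 h, easily inside the weekend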




Related Questions


Question : QuickTechie Inc's AWS consultant has been asked to design the storage layer for an application. The application requires disk performance of at least , IOPS. In
addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a capacity of at least 3 TB. Which
of the following designs will meet these objectives?
1. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800 GB SSD ephemeral disks provided with the instance. Provision 3x1 TB EBS
volumes, attach them to the instance, and configure them as a second RAID 0 volume. Configure synchronous, block-level replication from the ephemeral-backed volume to the EBS-backed
volume.
2. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800 GB SSD ephemeral disks provided with the instance. Configure synchronous
block-level replication to an identically configured instance in us-east-1b.
3. Access Mostly Uused Products by 50000+ Subscribers
instance.
4. Instantiate a c3.8xlarge instance in us-east-1. Provision 4x1 TB EBS volumes, attach them to the instance, and configure them as a single RAID 5 volume. Ensure that
EBS snapshots are performed every 15 minutes.
5. Instantiate a c3.8xlarge instance in us-east-1. Provision 3x1 TB EBS volumes, attach them to the instance, and configure them as a single RAID 0 volume. Ensure that EBS
snapshots are performed every 15 minutes.


Question : QuickTechie Inc requires the ability to analyze a large amount of data, which is stored on Amazon S3, using Amazon Elastic MapReduce. You are using the cc2.8xlarge instance type,
whose CPUs are mostly idle during processing. Which of the below would be the most cost-efficient way to reduce the runtime of the job?

1. Create more, smaller files on Amazon S3.
2. Add additional cc2.8xlarge instances by introducing a task group.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Create fewer, larger files on Amazon S3.



Question : Acmeshell Inc is running a successful multitier web application on AWS, and your marketing department has asked you to add a reporting tier to the application. The reporting
tier will aggregate and publish status reports every 30 minutes from user-generated information that is being stored in your web application's database. You are currently running a
Multi-AZ RDS MySQL instance for the database tier. You have also implemented ElastiCache as a database caching layer between the application tier and the database tier. Please select the
answer that will allow you to successfully implement the reporting tier with as little impact as possible to your database.


1. Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte range requests.
2. Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Generate the reports by querying the ElastiCache database caching tier.


Question : Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to often process
this data, and used RabbitMQ, an open-source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite. Your manager
told you to stay with the current design and leverage AWS archival storage and messaging services to minimize cost. Which is correct?
1. Use SQS for passing job messages; use CloudWatch alarms to terminate EC2 worker instances when they become idle. Once data is processed, change the storage class of
the S3 objects to Reduced Redundancy Storage.
2. Set up Auto-Scaled workers triggered by queue depth that use Spot instances to process messages in SQS. Once data is processed
3. Access Mostly Uused Products by 50000+ Subscribers
messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier.
4. Use SNS to pass job messages; use CloudWatch alarms to terminate Spot worker instances when they become idle. Once data is processed, change the storage class of the
S3 objects to Glacier.


Question : A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPsec VPN.
The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace
specific to that user. Which two approaches can satisfy these objectives? (Choose 2 answers)

A. Develop an identity broker that authenticates against the IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls
the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket.
B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to
assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.
C. Develop an identity broker that authenticates against LDAP and then calls the IAM Security Token Service to get IAM federated user credentials. The application calls the identity
broker to get IAM federated user credentials with access to the appropriate S3 bucket.
D. The application authenticates against LDAP. The application then calls the AWS Identity and Access Management (IAM) Security service to log in to IAM using the LDAP
credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.
E. The application authenticates against the IAM Security Token Service using the LDAP credentials. The application uses those temporary AWS security credentials to access the
appropriate S3 bucket.


1. A,B
2. B,C
3. Access Mostly Uused Products by 50000+ Subscribers
4. D,E
5. A,E



Question : An organization is measuring the latency of an application every minute and storing data inside a file in the JSON format. The organization wants
to send all latency data to AWS CloudWatch. How can the organization achieve this?
1. The user has to parse the file before uploading data to CloudWatch
2. It is not possible to upload the custom data to CloudWatch
3. Access Mostly Uused Products by 50000+ Subscribers
4. The user can use the CloudWatch Import command to import data from the file to CloudWatch
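
If the organization takes the approach in option 1 and parses the file itself, the upload could look roughly like this boto3 sketch; the file layout, namespace, and metric name are assumptions, not part of the question.

import json
from datetime import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Assumed file layout: [{"timestamp": "2015-06-01T10:00:00", "latency_ms": 123}, ...]
with open("latency.json") as f:
    samples = json.load(f)

# Push the parsed values as a custom metric; small batches keep each
# PutMetricData call well under the API's per-request limits.
for i in range(0, len(samples), 20):
    cloudwatch.put_metric_data(
        Namespace="QuickTechie/Application",      # hypothetical custom namespace
        MetricData=[
            {
                "MetricName": "Latency",
                "Timestamp": datetime.fromisoformat(s["timestamp"]),
                "Value": float(s["latency_ms"]),
                "Unit": "Milliseconds",
            }
            for s in samples[i:i + 20]
        ],
    )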