
AWS Certified Solutions Architect - Professional Questions and Answers (Dumps and Practice Questions)



Question : You've been hired to enhance the overall security posture for a very large e-commerce site. They have a well-architected multi-tier application running in a VPC that
uses ELBs in front of both the web and the app tier, with static assets served directly from S3. They are using a combination of RDS and DynamoDB for their dynamic data, and then
archiving nightly into S3 for further processing with EMR. They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized
access. Which approach provides a cost-effective, scalable mitigation to this kind of attack?
1. Recommend that they lease space at a DirectConnect partner location and establish a 1G DirectConnect connection to their VPC. They would then establish Internet
connectivity into their space, filter the traffic through a hardware Web Application Firewall (WAF), and then pass the traffic through the DirectConnect connection into their application
running in their VPC.
2. Add previously identified hostile source IPs as an explicit INBOUND DENY NACL to the web tier subnet.
3. Add a WAF tier by creating a new ELB and an AutoScaling group of EC2 Instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier
ELB. The WAF tier would pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group.
4. Remove all but TLS 1.2 from the web tier ELB and enable Advanced Protocol Filtering. This will enable the ELB itself to perform WAF (Web Application Firewall) functionality.


Correct Answer : 3

Explanation: Direct Connect is a costly solution, and security alone does not justify it, so option 1 is out.
Option 2 blocks only previously identified hostile IPs; an attacker can simply switch to new IP addresses for the same attack, so it is not a scalable mitigation and is also out.

AWS WAF allows you to protect your AWS-powered web applications from application-layer attacks. You simply create one or more web Access Control Lists (web ACLs), each
containing rules (sets of conditions defining acceptable or unacceptable requests/IP addresses) and actions to take when a rule is satisfied. Then you attach the web ACL to your
application's Amazon CloudFront distribution. From that point forward, incoming HTTP and HTTPS requests that arrive via the distribution are checked against each rule in the
associated web ACL. The conditions within the rules can be positive (allow certain requests or IP addresses) or negative (block certain requests or IP addresses).

You can use the rules and the conditions in many different ways. For example, you could create a rule that blocks all access from a particular hostile IP address. If you were getting
similar requests from many different IP addresses, you could choose to block on one or more strings in the URI, such as "/typo3/" or "/xampp/". You could also create rules that
allow access only to the actual functioning URIs within your application and block all others, as well as rules that guard against various forms of SQL injection.
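
For illustration, here is a minimal boto3 sketch of that flow using the current WAFv2 API: create an IP set of known hostile addresses, then attach a blocking rule for it to a web ACL scoped to CloudFront. All names, the sample address, and the region are illustrative assumptions, not part of the question.

    import boto3

    # WAFv2 web ACLs for CloudFront must be created in us-east-1
    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    # IP set holding previously identified hostile source addresses (sample value)
    ip_set = wafv2.create_ip_set(
        Name="hostile-sources",
        Scope="CLOUDFRONT",
        IPAddressVersion="IPV4",
        Addresses=["198.51.100.24/32"],
    )

    metrics = {"SampledRequestsEnabled": True,
               "CloudWatchMetricsEnabled": True,
               "MetricName": "ecommerceWebAcl"}

    # Web ACL: allow by default, block anything matching the hostile IP set
    wafv2.create_web_acl(
        Name="ecommerce-web-acl",
        Scope="CLOUDFRONT",
        DefaultAction={"Allow": {}},
        VisibilityConfig=metrics,
        Rules=[{
            "Name": "block-hostile-ips",
            "Priority": 0,
            "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}},
            "Action": {"Block": {}},
            "VisibilityConfig": {**metrics, "MetricName": "blockHostileIps"},
        }],
    )

The same web ACL could instead carry string-match or SQL-injection statements, as described above; the IP rule is just the simplest case.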






Question : Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be
able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don't want to create new IAM users for each NOC member and make those users sign in
again to the AWS Management Console. Which option below will meet the needs for your NOC members?
1. Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console.
2. Use web Identity Federation to retrieve AWS temporary security credentials to enable your NOC members to sign in to the AWS Management Console.
3. Use your on-premises SAML 2.0-compliant identity provider (IdP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.
4. Use your on-premises SAML 2.0-compliant identity provider (IdP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console.

Correct Answer : 3
Explanation: You can use a role to configure your SAML 2.0-compliant IdP and AWS to permit your federated users to access the AWS Management Console. The role grants the
user permissions to carry out tasks in the console. (SAML federated users can also be given programmatic access to AWS via temporary credentials from AssumeRoleWithSAML, as described below.)

This specific use of SAML differs from the more general one illustrated at About SAML 2.0-based Federation because this workflow opens the AWS Management Console on behalf of the
user. This requires the use of the AWS SSO endpoint instead of directly calling the AssumeRoleWithSAML API. The endpoint calls the API for the user and returns a URL that
automatically redirects the user's browser to the AWS Management Console.

IAM federation supports these use cases:

Web-based single sign-on (WebSSO) to the AWS Management Console from your organization. Users can sign in to a portal in your organization, select an option to go to AWS, and be
redirected to the console without having to provide additional sign-in information. For more information, see Enabling SAML 2.0 Federated Users to Access the AWS Management Console
and Creating a URL that Enables Federated Users to Access the AWS Management Console (Custom Federation Broker).
Federated access to allow a user or application in your organization to call AWS APIs. You use a SAML assertion (as part of the authentication response) that is generated in your
organization to get temporary security credentials. This scenario is similar to other federation scenarios that IAM supports, like those described in Requesting Temporary Security
Credentials and About Web Identity Federation. However, SAML 2.0-based identity providers in your organization handle many of the details at run time for performing authentication
and authorization checking.

Option 4 retrieves temporary security credentials for API calls, while option 3 uses the AWS SSO endpoint to sign users in to the AWS Management Console, which is what the NOC members need.
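
As a sketch of the programmatic side of this flow, the snippet below exchanges a SAML assertion for temporary credentials with AssumeRoleWithSAML and then builds a console sign-in URL via the federation endpoint, i.e. the custom federation broker pattern referenced above. The role and provider ARNs are hypothetical, and saml_assertion_b64 stands in for the base64 assertion returned by your IdP.

    import json
    import urllib.parse
    import urllib.request

    import boto3

    saml_assertion_b64 = "<base64 SAML assertion from your IdP>"  # placeholder

    sts = boto3.client("sts")
    resp = sts.assume_role_with_saml(
        RoleArn="arn:aws:iam::123456789012:role/NOC-Operators",          # hypothetical
        PrincipalArn="arn:aws:iam::123456789012:saml-provider/CorpIdP",  # hypothetical
        SAMLAssertion=saml_assertion_b64,
    )
    creds = resp["Credentials"]

    # Exchange the temporary credentials for a console sign-in token
    session = json.dumps({
        "sessionId": creds["AccessKeyId"],
        "sessionKey": creds["SecretAccessKey"],
        "sessionToken": creds["SessionToken"],
    })
    federation = "https://signin.aws.amazon.com/federation"
    token_url = (federation + "?Action=getSigninToken&Session="
                 + urllib.parse.quote_plus(session))
    signin_token = json.loads(urllib.request.urlopen(token_url).read())["SigninToken"]

    # URL that drops the federated user straight into the console
    login_url = (federation + "?Action=login&Issuer=example.com&Destination="
                 + urllib.parse.quote_plus("https://console.aws.amazon.com/")
                 + "&SigninToken=" + signin_token)
    print(login_url)

The AWS SSO endpoint in option 3 performs this token exchange for the user automatically; the broker above is the do-it-yourself equivalent.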




Question : Your company previously configured a heavily used, dynamically routed VPN connection between your on-premises data center and AWS. You recently provisioned a
DirectConnect connection and would like to start using the new connection. After configuring DirectConnect settings in the AWS Console, which of the following options will
provide the most seamless transition for your users?
1. Delete your existing VPN connection to avoid routing loops, configure your DirectConnect router with the appropriate settings, and verify network traffic is leveraging
DirectConnect.
2. Configure your DirectConnect router with a higher BGP priority than your VPN router, verify network traffic is leveraging DirectConnect, and then delete your existing
VPN connection.
3. Update your VPC route tables to point to the DirectConnect connection, configure your DirectConnect router with the appropriate settings, verify network traffic is
leveraging DirectConnect, and then delete the VPN connection.
4. Configure your DirectConnect router, update your VPC route tables to point to the DirectConnect connection, configure your VPN connection with a higher BGP priority, and
verify network traffic is leveraging the DirectConnect connection.

Correct Answer : 3

Explanation: If we delete the VPN connection first, the cutover will not be seamless, so option 1 is out. Option 4 is out because the VPN should not have a higher BGP priority than
Direct Connect; Direct Connect routes should take precedence over the VPN. Updating the route tables, verifying traffic is flowing over DirectConnect, and only then deleting the VPN gives users the most seamless transition.
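
A short boto3 sketch of the "verify network traffic is leveraging DirectConnect" step might look like the following: confirm the virtual interface is up, then inspect the VPC route tables for the propagated on-premises prefixes before deleting the VPN. This is illustrative only; the VPC itself prefers Direct Connect propagated routes over VPN routes for the same prefix.

    import boto3

    dx = boto3.client("directconnect")
    ec2 = boto3.client("ec2")

    # The virtual interface should be "available", meaning BGP is established
    for vif in dx.describe_virtual_interfaces()["virtualInterfaces"]:
        print(vif["virtualInterfaceId"], vif["virtualInterfaceState"])

    # Propagated routes show up in the VPC route tables; check the on-premises
    # prefixes are present before tearing down the VPN connection
    for rt in ec2.describe_route_tables()["RouteTables"]:
        for route in rt["Routes"]:
            print(rt["RouteTableId"],
                  route.get("DestinationCidrBlock"),
                  route.get("GatewayId"),
                  route.get("Origin"))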





Related Questions


Question : You have an application running on an EC2 Instance which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL,
the application should verify the existence of the file in S3. How should the application use AWS credentials to access the S3 bucket securely?
1. Use the AWS account access keys; the application retrieves the credentials from the source code of the application.
2. Create an IAM user for the application with permissions that allow list access to the S3 bucket. Launch the instance as the IAM user, and retrieve the IAM user's
credentials from the EC2 instance user data.
3. Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2
Instance metadata.
4. Create an IAM user for the application with permissions that allow list access to the S3 bucket. The application retrieves the IAM user credentials from a temporary
directory with permissions that allow read access only to the application user.
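
For reference, a minimal sketch of the role-based pattern: on an instance launched with an IAM role, boto3 resolves the role's temporary credentials from the instance metadata automatically, so the application can verify the object exists and only then generate a pre-signed URL. The bucket and key names are placeholders.

    import boto3
    from botocore.exceptions import ClientError

    # No keys in code or user data: on an instance launched with an IAM role,
    # boto3 pulls the role's temporary credentials from the instance metadata.
    s3 = boto3.client("s3")

    def presign_if_exists(bucket, key, expires=300):
        try:
            s3.head_object(Bucket=bucket, Key=key)  # verify the object exists
        except ClientError:
            return None
        return s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": bucket, "Key": key},
            ExpiresIn=expires,
        )

    url = presign_if_exists("my-private-bucket", "reports/2024.pdf")  # placeholders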


Question : You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not
need to be recreated in the second region? (Choose 2 answers)

A. Route 53 Record Sets
B. IAM Roles
C. Elastic IP Addresses (EIP)
D. EC2 Key Pairs
E. Launch configurations
F. Security Groups


1. A,C
2. B,C
3. A,B
4. E,F
5. D,F



Question : Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs and a Multi-AZ RDS Instance for data persistence.
The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache Cluster
to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%. Do you need to change anything in the architecture to maintain the high
availability of the application with the anticipated additional load? Why?


1. Yes, you should deploy two Memcached ElastiCache Clusters in different AZs, because the RDS Instance will not be able to handle the load if the cache node fails.
2. No, if the cache node fails, the automated ElastiCache node recovery feature will prevent any availability impact.
3. Yes, you should deploy the Memcached ElastiCache Cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails.
4. No, if the cache node fails you can always get the same data from the DB without having any availability impact.
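
As a sketch of the multi-AZ cache idea behind option 1, assuming the current ElastiCache API: one way to place Memcached nodes in different AZs is AZMode="cross-az", so a single AZ failure does not drop the whole cache tier onto the already-loaded RDS instance. All identifiers are illustrative.

    import boto3

    elasticache = boto3.client("elasticache")

    # Memcached nodes spread across two AZs so losing one AZ does not send
    # the full read load to the RDS master
    elasticache.create_cache_cluster(
        CacheClusterId="web-query-cache",   # illustrative
        Engine="memcached",
        CacheNodeType="cache.r6g.large",    # illustrative
        NumCacheNodes=2,
        AZMode="cross-az",
        PreferredAvailabilityZones=["us-east-1a", "us-east-1b"],
    )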




Question : A read-only news reporting site with a combined web and application tier and a database tier that receives large and unpredictable traffic demands must be able to
respond to these traffic fluctuations automatically. What AWS services should be used to meet these requirements?
1. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch, and RDS with read
replicas
2. Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch and RDS with read replicas
3. Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch and multi-AZ RDS
4. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch and multi-AZ RDS




Question : A company is running a batch analysis every hour on their main transactional DB, which runs on an RDS MySQL instance, to populate their central Data
Warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management
dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually sent email notifies that an update is
required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as
much as possible?


1. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
2. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
3. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard.
4. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
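
A brief boto3 sketch of the read replica plus SNS combination, with illustrative identifiers: the hourly batch reads from a replica so the master stays responsive, and when the batch completes an SNS message (deliverable by email, matching the existing manual process) triggers the dashboard update.

    import boto3

    rds = boto3.client("rds")
    sns = boto3.client("sns")

    # Read replica that absorbs the hourly batch-analysis reads
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="prod-mysql-analytics",   # illustrative
        SourceDBInstanceIdentifier="prod-mysql",       # illustrative
    )

    # ... run the batch against the replica, load Redshift, then notify:
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:dashboard-updates",  # hypothetical
        Subject="Data warehouse refresh complete",
        Message="New data is available; update the management dashboard.",
    )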



Question : Your customer wants to consolidate their log streams (access logs, application logs, security logs, etc.) in one single system. Once consolidated, the customer
wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate the heuristics, which requires going back to data samples extracted from
the last 12 hours.

What is the best approach to meet your customer's requirements?


1. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
2. Send all the log events to Amazon Kinesis, and develop a client process to apply heuristics on the logs
3. Configure Amazon CloudTrail to receive custom logs, and use EMR to apply heuristics on the logs
4. Set up an Auto Scaling group of EC2 syslogd servers, store the logs on S3, and use EMR to apply heuristics on the logs
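
To illustrate the Kinesis option, here is a minimal consumer sketch: a client process tails a shard for real-time heuristics, and the same stream can be replayed from a timestamp (within the retention window) to pull the last 12 hours of samples for validation. The stream name and apply_heuristics are placeholders.

    import time
    import boto3

    kinesis = boto3.client("kinesis")
    STREAM = "consolidated-logs"  # placeholder stream name

    def apply_heuristics(data: bytes) -> None:
        pass  # placeholder for the real-time detection logic

    shard_id = kinesis.describe_stream(
        StreamName=STREAM)["StreamDescription"]["Shards"][0]["ShardId"]

    # LATEST tails the stream for real-time analysis; to validate heuristics
    # against the last 12 hours, use ShardIteratorType="AT_TIMESTAMP" with a
    # Timestamp of now minus 12 hours (within the stream's retention period).
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM,
        ShardId=shard_id,
        ShardIteratorType="LATEST",
    )["ShardIterator"]

    while iterator:
        out = kinesis.get_records(ShardIterator=iterator, Limit=1000)
        for record in out["Records"]:
            apply_heuristics(record["Data"])
        iterator = out["NextShardIterator"]
        time.sleep(1)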