Visit the official SkillCertPro website:
For the full set of 1,170 questions, go to
https://skillcertpro.com/product/aws-solutions-architect-associate-saa-c03-practice-tests/
SkillCertPro offers a detailed explanation for each question, which helps you understand the concepts better.
It is recommended to score above 85% on SkillCertPro exams before attempting the real exam.
SkillCertPro updates exam questions every 2 weeks.
You will get lifetime access and lifetime free updates.
SkillCertPro offers a 100% first-attempt pass guarantee.
Question 1:
An application running on Amazon EC2 needs to asynchronously invoke an AWS Lambda function to perform data processing. The services should be decoupled.
Which service can be used to decouple the compute services?
A. Amazon MQ
B. AWS Step Functions
C. Amazon SNS
D. AWS Config
Answer: C
Explanation:
You can use a Lambda function to process Amazon Simple Notification Service notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. This solution decouples the Amazon EC2 application from Lambda and ensures the Lambda function is invoked.
CORRECT: "Amazon SNS" is the correct answer.
INCORRECT: "AWS Config" is incorrect. AWS Config is a service used for continuous compliance assessment, not application decoupling.
INCORRECT: "Amazon MQ" is incorrect. Amazon MQ is a managed message broker aimed at existing applications that use industry-standard messaging protocols and are being migrated into AWS. SQS or SNS should be used for new applications built in the cloud.
INCORRECT: "AWS Step Functions" is incorrect. AWS Step Functions is a workflow orchestration service. It is not the best solution for this scenario.
References:
https://docs.aws.amazon.com/lambda/latest/dg/with-sns.html
https://aws.amazon.com/sns/features/
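As a rough sketch of the pattern, the EC2 application would publish each processing request to the SNS topic and let the subscription invoke Lambda. The function name, topic ARN, and message shape below are illustrative, not part of the question; `sns_client` is expected to be a boto3 SNS client (`boto3.client("sns")`).

```python
import json

def publish_processing_request(sns_client, topic_arn, record):
    """Publish a data-processing request to an SNS topic.

    The EC2 application only needs the topic ARN; the Lambda function
    is subscribed to the topic, so the two services stay decoupled.
    """
    response = sns_client.publish(
        TopicArn=topic_arn,
        Message=json.dumps(record),  # delivered to Lambda inside the SNS event payload
        Subject="data-processing-request",
    )
    return response["MessageId"]
```

Because the publisher only depends on the topic ARN, the Lambda function can be replaced or scaled without any change to the EC2 application.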
Question 2:
A company plans to make an Amazon EC2 Linux instance unavailable outside of business hours to save costs. The instance is backed by an Amazon EBS volume. There is a requirement that the contents of the instance’s memory must be preserved when it is made unavailable.
How can a solutions architect meet these requirements?
A. Terminate the instance outside business hours. Recover the instance again when required.
B. Hibernate the instance outside business hours. Start the instance again when required.
C. Stop the instance outside business hours. Start the instance again when required.
D. Use Auto Scaling to scale down the instance outside of business hours. Scale up the instance when required.
Answer: B
Explanation:
When you hibernate an instance, Amazon EC2 signals the operating system to perform hibernation (suspend-to-disk). Hibernation saves the contents of the instance memory (RAM) to your Amazon Elastic Block Store (Amazon EBS) root volume. Amazon EC2 persists the instance's EBS root volume and any attached EBS data volumes. When you start your instance:
– The EBS root volume is restored to its previous state
– The RAM contents are reloaded
– The processes that were previously running on the instance are resumed
– Previously attached data volumes are reattached and the instance retains its instance ID
CORRECT: "Hibernate the instance outside business hours. Start the instance again when required" is the correct answer.
INCORRECT: "Stop the instance outside business hours. Start the instance again when required" is incorrect. When an instance is stopped the operating system is shut down and the contents of memory are lost.
INCORRECT: "Use Auto Scaling to scale down the instance outside of business hours. Scale up the instance when required" is incorrect. Auto Scaling does not scale individual instances up and down; it scales in by terminating instances and scales out by launching instances. When scaling out, new instances are launched, so no state is available from previously terminated instances.
INCORRECT: "Terminate the instance outside business hours. Recover the instance again when required" is incorrect. You cannot recover terminated instances; instance recovery applies only to instances that have become impaired, and only in some circumstances.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html
Question 3:
A company has deployed a new website on Amazon EC2 instances behind an Application Load Balancer (ALB). Amazon Route 53 is used for the DNS service. The company has asked a Solutions Architect to create a backup website with support contact details that users will be directed to automatically if the primary website is down.
How should the Solutions Architect deploy this solution cost-effectively?
A. Deploy the backup website on EC2 and ALB in another Region and use Route 53 health checks for failover routing.
B. Configure a static website using Amazon S3 and create a Route 53 failover routing policy.
C. Configure a static website using Amazon S3 and create a Route 53 weighted routing policy.
D. Create the backup website on EC2 and ALB in another Region and create an AWS Global Accelerator endpoint.
Answer: B
Explanation:
The most cost-effective solution is to host a static website in an Amazon S3 bucket and use a failover routing policy in Amazon Route 53. With a failover routing policy, users are directed to the main website as long as it responds to health checks successfully.
If the main website fails to respond to health checks (i.e., it is down), Route 53 begins directing users to the backup website hosted in the Amazon S3 bucket. It is important to set the TTL on the Route 53 records appropriately so that users resolve the failover address within a short time.
CORRECT: "Configure a static website using Amazon S3 and create a Route 53 failover routing policy" is the correct answer.
INCORRECT: "Configure a static website using Amazon S3 and create a Route 53 weighted routing policy" is incorrect. Weighted routing is used to send a percentage of traffic to each of multiple endpoints. In this case all traffic should go to the primary until it fails, and then all traffic should go to the backup.
INCORRECT: "Deploy the backup website on EC2 and ALB in another Region and use Route 53 health checks for failover routing" is incorrect. This is not a cost-effective solution for the backup website. It could be implemented using Route 53 failover routing, which uses health checks, but it would be an expensive option.
INCORRECT: "Create the backup website on EC2 and ALB in another Region and create an AWS Global Accelerator endpoint" is incorrect. Global Accelerator improves performance by directing traffic to the nearest healthy endpoint. It is not appropriate for failover in this scenario and is also a very expensive solution.
References:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring.html
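The failover decision can be illustrated with a toy model (this is not the Route 53 API; the record fields and endpoint names are made up for illustration):

```python
def select_failover_record(records, healthy):
    """Toy model of a Route 53 failover routing decision.

    records: list of dicts with "Failover" ("PRIMARY" or "SECONDARY")
             and "Value" (the endpoint the record points at).
    healthy: dict mapping endpoint value -> latest health check result.
    Route 53 answers with the primary while its health check passes,
    and with the secondary otherwise.
    """
    primary = next(r for r in records if r["Failover"] == "PRIMARY")
    secondary = next(r for r in records if r["Failover"] == "SECONDARY")
    return primary if healthy.get(primary["Value"], False) else secondary
```

In the real service the health check runs continuously against the primary endpoint, and the S3 website endpoint serves as the secondary record's target.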
Question 4:
An organization wants to share regular updates about its charitable work using static webpages. The pages are expected to receive a large number of views from around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution.
Which action should the solutions architect take to accomplish this?
A. Use the geoproximity feature of Amazon Route 53
B. Use cross-Region replication to all Regions
C. Generate presigned URLs for the files
D. Use Amazon CloudFront with the S3 bucket as its origin
Answer: D
Explanation:
Amazon CloudFront can be used to cache the files in edge locations around the world and this will improve the performance of the webpages.
To serve a static website hosted on Amazon S3, you can deploy a CloudFront distribution using one of these configurations:
– Using a REST API endpoint as the origin, with access restricted by an origin access identity (OAI)
– Using a website endpoint as the origin, with anonymous (public) access allowed
– Using a website endpoint as the origin, with access restricted by a Referer header
CORRECT: "Use Amazon CloudFront with the S3 bucket as its origin" is the correct answer.
INCORRECT: "Generate presigned URLs for the files" is incorrect, as presigned URLs are used to restrict access, which is not a requirement here.
INCORRECT: "Use cross-Region replication to all Regions" is incorrect, as this does not provide a mechanism for directing users to the closest copy of the static webpages.
INCORRECT: "Use the geoproximity feature of Amazon Route 53" is incorrect, as this does not provide multiple copies of the data in different geographic locations.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/
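The benefit of the edge cache can be seen in a minimal toy model (this is a sketch of the caching behavior only, not CloudFront itself; the TTL, paths, and status labels are invented for illustration):

```python
import time

class EdgeCache:
    """Toy model of a CloudFront edge location: the first request for a
    path goes to the origin (here, the S3 bucket); repeat requests
    within the TTL are served from the edge without touching the origin."""

    def __init__(self, origin_fetch, ttl_seconds=3600):
        self.origin_fetch = origin_fetch   # callable: path -> body
        self.ttl = ttl_seconds
        self._cache = {}                   # path -> (expires_at, body)

    def get(self, path, now=None):
        now = time.time() if now is None else now
        hit = self._cache.get(path)
        if hit and hit[0] > now:
            return hit[1], "edge-hit"      # served locally, low latency
        body = self.origin_fetch(path)     # cache miss: fetch from origin
        self._cache[path] = (now + self.ttl, body)
        return body, "origin-miss"
```

Because most viewers hit an already-warm edge, the S3 origin sees only a small fraction of the total request volume, which is why CloudFront suits a high-traffic global static site.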
Question 5:
A solutions architect needs to backup some application log files from an online ecommerce store to Amazon S3. It is unknown how often the logs will be accessed or which logs will be accessed the most. The solutions architect must keep costs as low as possible by using the appropriate S3 storage class.
Which S3 storage class should be implemented to meet these requirements?
A. S3 Glacier
B. S3 Standard-Infrequent Access (S3 Standard-IA)
C. S3 Intelligent-Tiering
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Answer: C
Explanation:
The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead.
It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. This is an ideal use case for intelligent-tiering as the access patterns for the log files are not known.
CORRECT: "S3 Intelligent-Tiering" is the correct answer.
INCORRECT: "S3 Standard-Infrequent Access (S3 Standard-IA)" is incorrect, as retrieval fees could become expensive if the data is accessed often.
INCORRECT: "S3 One Zone-Infrequent Access (S3 One Zone-IA)" is incorrect, as retrieval fees could become expensive if the data is accessed often.
INCORRECT: "S3 Glacier" is incorrect, as retrieval fees could become expensive if the data is accessed often. Glacier also requires more work to retrieve data from the archive, and expedited access requirements can add further costs.
References:
https://aws.amazon.com/s3/storage-classes/#Unknown_or_changing_access
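The trade-off can be made concrete with a simple cost model. Note that all rates below are illustrative assumptions (roughly in line with historical us-east-1 list prices), not current AWS pricing; the point is only that Standard-IA charges per-GB retrieval fees while Intelligent-Tiering instead charges a small per-object monitoring fee:

```python
def monthly_cost_standard_ia(gb_stored, gb_retrieved,
                             storage_rate=0.0125, retrieval_rate=0.01):
    # Standard-IA: cheap storage, but every GB read incurs a retrieval fee.
    # Rates are illustrative assumptions, not current AWS pricing.
    return gb_stored * storage_rate + gb_retrieved * retrieval_rate

def monthly_cost_intelligent_tiering(gb_frequent, gb_infrequent, object_count,
                                     frequent_rate=0.023, infrequent_rate=0.0125,
                                     monitoring_per_1000=0.0025):
    # Intelligent-Tiering: no retrieval fee; a small monitoring charge
    # per 1,000 objects pays for the automatic tiering.
    return (gb_frequent * frequent_rate
            + gb_infrequent * infrequent_rate
            + object_count / 1000 * monitoring_per_1000)
```

With unknown access patterns, a month of heavy reads can make Standard-IA's retrieval fees dominate, while Intelligent-Tiering's cost stays bounded by the monitoring fee.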
Question 6:
An eCommerce application consists of three tiers. The web tier includes EC2 instances behind an Application Load Balancer, the middle tier uses EC2 instances and an Amazon SQS queue to process orders, and the database tier consists of a DynamoDB table with auto scaling enabled. During busy periods customers have complained about delays in the processing of orders. A Solutions Architect has been tasked with reducing processing times.
Which action will be MOST effective in accomplishing this requirement?
A. Replace the Amazon SQS queue with Amazon Kinesis Data Firehose.
B. Add an Amazon CloudFront distribution with a custom origin to cache the responses for the web tier.
C. Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth.
D. Use Amazon DynamoDB Accelerator (DAX) in front of the DynamoDB backend tier.
Answer: C
Explanation:
The most likely cause of the processing delays is insufficient instances in the middle tier where the order processing takes place. The most effective solution to reduce processing times in this case is to scale based on the backlog per instance (number of messages in the SQS queue) as this reflects the amount of work that needs to be done.
CORRECT: "Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth" is the correct answer.
INCORRECT: "Replace the Amazon SQS queue with Amazon Kinesis Data Firehose" is incorrect. The issue is not the efficiency of queuing messages but the processing of the messages. Scaling the EC2 instances to reflect the workload is a better solution.
INCORRECT: "Use Amazon DynamoDB Accelerator (DAX) in front of the DynamoDB backend tier" is incorrect. The DynamoDB table is configured with auto scaling, so it is not likely to be the bottleneck in order processing.
INCORRECT: "Add an Amazon CloudFront distribution with a custom origin to cache the responses for the web tier" is incorrect. This would cache static content and speed up web response times, but not order processing, which takes place in the middle tier.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
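The backlog-per-instance calculation from the referenced guide can be sketched as follows; the throughput and latency figures passed in are hypothetical workload parameters, not values from the question:

```python
import math

def desired_capacity(queue_depth, msgs_per_instance_per_sec,
                     target_latency_sec, max_capacity):
    """Backlog-per-instance scaling for an SQS-driven worker tier,
    following the approach in the EC2 Auto Scaling user guide.

    acceptable_backlog is the number of messages one instance can clear
    within the latency target; scale out until each instance's share of
    the queue is at or below that figure.
    """
    acceptable_backlog = msgs_per_instance_per_sec * target_latency_sec
    needed = math.ceil(queue_depth / acceptable_backlog)
    return min(max(needed, 1), max_capacity)   # keep at least one worker
```

For example, with 1,000 queued messages, instances that each process 10 messages per second, and a 10-second latency target, the group should run 10 instances; the queue depth itself (from the `ApproximateNumberOfMessages` CloudWatch metric) drives the scaling decision.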
Question 7:
A company runs an application in a factory that has a small rack of physical compute resources. The application stores data on a network attached storage (NAS) device using the NFS protocol. The company requires a daily offsite backup of the application data.
Which solution can a Solutions Architect recommend to meet this requirement?
A. Create an IPSec VPN to AWS and configure the application to mount the Amazon EFS file system. Run a copy job to backup the data to EFS.
B. Use an AWS Storage Gateway volume gateway with cached volumes on premises to replicate the data to Amazon S3.
C. Use an AWS Storage Gateway file gateway hardware appliance on premises to replicate the data to Amazon S3.
D. Use an AWS Storage Gateway volume gateway with stored volumes on premises to replicate the data to Amazon S3.
Answer: C
Explanation:
The AWS Storage Gateway Hardware Appliance is a physical, standalone, validated server configuration for on-premises deployments. It comes pre-loaded with Storage Gateway software, and provides all the required CPU, memory, network, and SSD cache resources for creating and configuring File Gateway, Volume Gateway, or Tape Gateway.
A file gateway is the correct type of appliance to use for this use case as it is suitable for mounting via the NFS and SMB protocols.
CORRECT: "Use an AWS Storage Gateway file gateway hardware appliance on premises to replicate the data to Amazon S3" is the correct answer.
INCORRECT: "Use an AWS Storage Gateway volume gateway with stored volumes on premises to replicate the data to Amazon S3" is incorrect. Volume gateways are used for block-based storage, and this solution requires NFS (file-based storage).
INCORRECT: "Use an AWS Storage Gateway volume gateway with cached volumes on premises to replicate the data to Amazon S3" is incorrect. Volume gateways are used for block-based storage, and this solution requires NFS (file-based storage).
INCORRECT: "Create an IPSec VPN to AWS and configure the application to mount the Amazon EFS file system. Run a copy job to backup the data to EFS" is incorrect. It would be better to use a Storage Gateway, which automatically takes care of synchronizing a copy of the data to AWS.
References:
https://aws.amazon.com/storagegateway/hardware-appliance/
Question 8:
A company provides a REST-based interface to an application that allows a partner company to send data in near-real time. The application then processes the data that is received and stores it for later analysis. The application runs on Amazon EC2 instances.
The partner company has received many HTTP 503 Service Unavailable errors when sending data to the application: during spikes in data volume, the compute capacity reaches its limits and is unable to process the incoming requests.
Which design should a Solutions Architect implement to improve scalability?
A. Use Amazon SQS to ingest the data. Configure the EC2 instances to process messages from the SQS queue.
B. Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions.
C. Use Amazon SNS to ingest the data and trigger AWS Lambda functions to process the data in near-real time.
D. Use Amazon API Gateway in front of the existing application. Create a usage plan with a quota limit for the partner company.
Answer: B
Explanation:
Amazon Kinesis enables you to ingest, buffer, and process streaming data in real-time. Kinesis can handle any amount of streaming data and process data from hundreds of thousands of sources with very low latencies. This is an ideal solution for data ingestion.
To ensure the compute layer can scale to process increasing workloads, the EC2 instances should be replaced by AWS Lambda functions. Lambda can scale seamlessly by running multiple executions in parallel.
CORRECT: "Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions" is the correct answer.
INCORRECT: "Use Amazon API Gateway in front of the existing application. Create a usage plan with a quota limit for the partner company" is incorrect. A usage plan would limit the amount of data that is accepted and cause the partner company to receive even more errors.
INCORRECT: "Use Amazon SQS to ingest the data. Configure the EC2 instances to process messages from the SQS queue" is incorrect. Amazon Kinesis Data Streams should be used instead of Amazon SQS for near-real-time and real-time use cases.
INCORRECT: "Use Amazon SNS to ingest the data and trigger AWS Lambda functions to process the data in near-real time" is incorrect. SNS is a pub/sub notification service; it does not buffer data, so it would not protect the processing layer during spikes.
References:
https://aws.amazon.com/kinesis/
https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
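On the processing side, a Lambda function subscribed to the stream receives batches of records whose data is base64-encoded. The handler below is a minimal sketch assuming JSON payloads; the return shape is illustrative:

```python
import base64
import json

def handler(event, context=None):
    """Sketch of a Lambda handler for a Kinesis Data Streams event source.

    Each record's data arrives base64-encoded under
    event["Records"][n]["kinesis"]["data"].
    """
    processed = 0
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # a real handler would validate and store `payload` here
        processed += 1
    return {"batchSize": processed}
```

Lambda scales by running one concurrent invocation per shard (more with parallelization factor), so adding shards to the stream increases both ingest and processing throughput.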
Question 9:
A solutions architect is designing the infrastructure to run an application on Amazon EC2 instances. The application requires high availability and must dynamically scale based on demand to be cost efficient.
What should the solutions architect do to meet these requirements?
A. Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones
B. Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Regions
C. Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy instances to multiple Availability Zones
D. Configure an Amazon CloudFront distribution in front of an Auto Scaling group to deploy instances to multiple Regions
Answer: A
Explanation:
The Amazon EC2-based application must be highly available and elastically scalable. Auto Scaling provides elasticity by dynamically launching and terminating instances based on demand, and it can do so across multiple Availability Zones for high availability.
Incoming connections can be distributed to the instances by using an Application Load Balancer (ALB).
CORRECT: "Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones" is the correct answer.
INCORRECT: "Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy instances to multiple Availability Zones" is incorrect, as API Gateway is not used for load balancing connections to Amazon EC2 instances.
INCORRECT: "Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Regions" is incorrect, as you cannot launch instances in multiple Regions from a single Auto Scaling group.
INCORRECT: "Configure an Amazon CloudFront distribution in front of an Auto Scaling group to deploy instances to multiple Regions" is incorrect, as you cannot launch instances in multiple Regions from a single Auto Scaling group.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
https://aws.amazon.com/elasticloadbalancing/
Question 10:
A company has uploaded some highly critical data to an Amazon S3 bucket. Management are concerned about data availability and require that steps are taken to protect the data from accidental deletion. The data should still be accessible, and a user should be able to delete the data intentionally.
Which combination of steps should a solutions architect take to accomplish this? (Select TWO.)
A. Enable MFA Delete on the S3 bucket.
B. Create a lifecycle policy for the objects in the S3 bucket.
C. Enable default encryption on the S3 bucket.
D. Enable versioning on the S3 bucket.
E. Create a bucket policy on the S3 bucket.
Answer: A and D
Explanation:
Multi-factor authentication (MFA) delete adds an additional step before an object can be deleted from a versioning-enabled bucket.
With MFA delete the bucket owner must include the x-amz-mfa request header in requests to permanently delete an object version or change the versioning state of the bucket.
CORRECT: "Enable versioning on the S3 bucket" is a correct answer.
CORRECT: "Enable MFA Delete on the S3 bucket" is also a correct answer.
INCORRECT: "Create a bucket policy on the S3 bucket" is incorrect. A bucket policy is not required to enable MFA delete.
INCORRECT: "Enable default encryption on the S3 bucket" is incorrect. Encryption does not protect against deletion.
INCORRECT: "Create a lifecycle policy for the objects in the S3 bucket" is incorrect. A lifecycle policy will move data to another storage class but does not protect against deletion.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
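Why versioning protects against accidental deletion can be seen in a toy model of a versioned bucket (this models the behavior only, not the S3 API; keys and method names are invented for illustration):

```python
DELETE_MARKER = object()  # sentinel standing in for an S3 delete marker

class VersionedBucket:
    """Toy model of S3 versioning: a simple DELETE adds a delete marker
    on top of the version stack instead of destroying any data."""

    def __init__(self):
        self._versions = {}            # key -> list of bodies, newest last

    def put(self, key, body):
        self._versions.setdefault(key, []).append(body)

    def delete(self, key):
        # An un-versioned DELETE only adds a marker; older versions remain.
        self._versions.setdefault(key, []).append(DELETE_MARKER)

    def get(self, key):
        versions = self._versions.get(key, [])
        if not versions or versions[-1] is DELETE_MARKER:
            raise KeyError(key)        # a plain GET sees the object as gone...
        return versions[-1]

    def undelete(self, key):
        # ...but removing the delete marker restores the object.
        if self._versions.get(key) and self._versions[key][-1] is DELETE_MARKER:
            self._versions[key].pop()
```

MFA delete adds the second layer: permanently removing a version (or suspending versioning) additionally requires the MFA code in the request, so an intentional delete remains possible while an accidental one is recoverable.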