Visit the official SkillCertPro website:
For the full set of 200+ questions, go to
https://skillcertpro.com/product/aws-certified-database-specialty-practice-exam-set/
SkillCertPro offers a detailed explanation for each question, which helps you understand the concepts better.
It is recommended to score above 85% on SkillCertPro exams before attempting the real exam.
SkillCertPro updates exam questions every 2 weeks.
You will get lifetime access and lifetime free updates.
SkillCertPro offers a 100% first-attempt pass guarantee.
Question 1:
A developer is implementing an IoT application using DynamoDB as the data store for device event data. An application requirement is to automatically purge all event data older than 30 days. What is the optimal option to implement this requirement?
A. Enable TTL on the DynamoDB table and store the expiration timestamp in the TTL attribute in the epoch time format.
B. Implement a Lambda function to query the table for items with timestamps older than 30 days and delete them. Use CloudWatch Events to trigger the Lambda function on a schedule.
C. Create a new DynamoDB table every 30 days. Delete the old DynamoDB table.
D. Enable DynamoDB streams on the table. Implement Lambda function to read events from the stream and delete expired items.
Answer: A
Explanation:
"Option A is CORRECT because Time to Live (TTL) for Amazon DynamoDB is functionality that enables automatic deletion of items after a specified expiration time defined by a timestamp in the TTL attribute.
Option B is incorrect because this is not the optimal solution as it requires implementation and deployment for custom code inside a Lambda function.
Option C is incorrect because it would delete items newer than 30 days.
Option D is incorrect because this does not satisfy the original requirement. It would not purge data in the table.
Reference:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html
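As a minimal sketch of Option A using boto3 (the table name, key schema, and TTL attribute name below are hypothetical):

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on the table, naming the attribute that holds the expiry timestamp.
# Table and attribute names here are hypothetical.
dynamodb.update_time_to_live(
    TableName="device_events",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# When writing an event, store the expiration as epoch seconds 30 days ahead.
# DynamoDB deletes the item automatically, at no cost, after this time passes.
expires_at = int(time.time()) + 30 * 24 * 60 * 60
dynamodb.put_item(
    TableName="device_events",
    Item={
        "device_id": {"S": "sensor-001"},
        "event_time": {"N": str(int(time.time()))},
        "expires_at": {"N": str(expires_at)},
    },
)
```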
Question 2:
A retail organization is developing a data lake solution utilizing Amazon S3 to store a large amount of data. They would like to be able to perform data exploration and discovery activities by running SQL queries on the data. Based on the output of those activities, they would like to produce complex reports accessible to a large number of users via BI applications. What AWS services should be part of their solution (SELECT TWO)?
A. Amazon Redshift Spectrum for the complex reporting
B. AWS Lambda for the complex reporting
C. AWS Glue for the data discovery activities
D. Amazon Athena for the data discovery activities
E. Amazon QuickSight for the data discovery activities
Answer: A and D
Explanation:
"Option A is CORRECT because Amazon Redshift Spectrum can be used to query data from files in Amazon S3 without having to load the data into Amazon Redshift tables. Redshift Spectrum compute-intensive queries employ massive parallelism to execute very fast against large datasets.
Option B is incorrect because this is not the optimal solution as it requires custom code development and deployment using AWS Lambda.
Option C is incorrect because AWS Glue is an ETL service used to categorize and transform data. It cannot be used for querying data.
Option D is CORRECT because Amazon Athena can be used to perform ad-hoc queries on data in S3 directly using SQL syntax.
Option E is incorrect because Amazon QuickSight is a business analytics service used to build visualizations and business insights reports. It is not used for data exploration activities.
Reference:
https://docs.aws.amazon.com/athena/latest/ug/when-should-i-use-athena.html
https://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html
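As a minimal sketch of Option D using boto3 (the database, table, and results bucket names below are hypothetical, and the data is assumed to be already registered in the Glue Data Catalog):

```python
import boto3

athena = boto3.client("athena")

# Run an ad-hoc SQL query directly against data in S3.
# Database, table, and output bucket names are hypothetical.
response = athena.start_query_execution(
    QueryString="SELECT product_id, SUM(quantity) FROM sales GROUP BY product_id",
    QueryExecutionContext={"Database": "datalake"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print("Query execution id:", response["QueryExecutionId"])
```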
Question 3:
To increase the performance of their database solution, an organization is looking to migrate from their RDS MySQL DB instance to an Aurora MySQL DB cluster. What is the optimal solution for performing the data migration in this scenario?
A. Use the AWS Database Migration Service (DMS) to migrate the data.
B. Use the MySQL mysqldump utility to copy the data.
C. Create an Aurora Read Replica of the source database. After the migration is complete, promote the Aurora Read Replica to a stand-alone DB cluster.
D. Copy the backup files from the source database to an Amazon S3 bucket. Restore the Aurora MySQL DB cluster from those files.
Answer: C
Explanation:
"Option A is incorrect because using DMS in this scenario is not the most optimal solution from an operational and cost-effective point of view. It requires manual planning and execution of migration steps and additional infrastructure to support DMS.
Option B is incorrect because it is not the most optimal solution. It requires manual steps to perform the data migration and it does not take advantage of native capabilities offered by AWS.
Option C is CORRECT because it is possible to create an Aurora Replica of an existing RDS MySQL database. This then automatically migrates the data from source to target. Once data migration is complete, it is possible to promote the read replica to a stand-alone cluster. This is the recommended approach to migrate data from RDS MySQL database to an Aurora MySQL Cluster.
Option D is incorrect because it is not the most optimal solution. It requires manual steps to perform data export and import. Additionally, it incurs costs associated with S3 storage.
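As a minimal sketch of Option C using boto3 (all identifiers, the source ARN, and the instance class below are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora MySQL cluster that replicates from the existing RDS MySQL
# instance. Identifiers and the source ARN are hypothetical.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-migration-cluster",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:db:source-mysql-instance"
    ),
)

# Add a DB instance to the cluster so it can serve traffic.
rds.create_db_instance(
    DBInstanceIdentifier="aurora-migration-instance-1",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="aurora-migration-cluster",
)

# Once replica lag reaches zero, detach the cluster from the source.
# (A real migration would poll replica status before making this call.)
rds.promote_read_replica_db_cluster(
    DBClusterIdentifier="aurora-migration-cluster"
)
```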
Question 4:
A company based in North America wishes to expand their operation to the European regions. They wish to perform performance testing and UAT on production-like data of their DynamoDB-based backend system in the European regions. What is the optimal solution for migrating the data so the team can perform their testing tasks?
A. Perform Point-in-Time Recovery of their current DynamoDB table into the new region.
B. Enable DynamoDB Streams on the current DynamoDB table. Create a new DynamoDB table in the new region. Create a Lambda function to poll the current DynamoDB table stream and deliver batch records from streams to the new DynamoDB table.
C. Create a new DynamoDB table in the new region. Create an AWS Glue job to perform a data export from the current DynamoDB table and data import into the new DynamoDB table.
D. Enable DynamoDB Streams. Add a European region to the current DynamoDB table's Global Tables configuration.
Answer: A
Explanation:
"Option A is CORRECT because DynamoDB Point-in-Time Restore enables recovery of a DynamoDB table across AWS regions. Further it enables full table restore, as well restore of GSI’s and LSI’s. Restoring a DynamoDB table using Point-in-Time restore consumes no provisioned throughput. Data transfer charges between the regions are the only costs associated with this solution.
Option B is incorrect because this is not the optimal solution. It requires custom code associated with Lambda function. Further, there are additional costs associated with DynamoDB streams and Lambda function executions.
Option C is incorrect because this is not the optimal solution. It requires custom code associated with AWS Glue job. Further, there are additional costs associated with the execution of the Glue job.
Option D is incorrect because this would enable cross-region global table. Thus, all the testing would be performed on the production table and data. This is not the original requirement.
Reference:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery_Howitworks.html
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.Tutorial.html
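As a minimal sketch of Option A using boto3 (the region, account, and table names are hypothetical, and routing the cross-region restore through the source table's ARN is an assumption based on the explanation above):

```python
import boto3

# Client in the destination (European) region. Names and region are
# hypothetical; the cross-region restore path shown here is an assumption.
dynamodb = boto3.client("dynamodb", region_name="eu-west-1")

dynamodb.restore_table_to_point_in_time(
    SourceTableArn="arn:aws:dynamodb:us-east-1:123456789012:table/orders",
    TargetTableName="orders-uat",
    UseLatestRestorableTime=True,
)
```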
Question 5:
An Amazon RDS MySQL database instance is failing to reboot. Event logs show an error: “MySQL could not be started due to incompatible parameters”. What actions must be performed to resolve this issue?
A. Use SELECT VARIABLE_NAME, SESSION_VALUE, GLOBAL_VALUE statement to identify system variables that have custom values. Use SET statement to set any modified system variables to their default values.
B. Select the default DB Parameter group in the RDS console. Choose the Reset Parameter Group Action to revert the parameters to their default values.
C. Modify the RDS database instance to use the default DB Parameter Group. Reboot the instance.
D. Compare the RDS database instance's DB parameter group to the default parameter group. Reset any custom parameters to their default values. Reboot the instance.
Answer: D
Explanation:
"Option A is incorrect because AWS RDS does not allow modification of MySQL system variables directly using the SET statement. Instead DB parameter groups must be used.
Option B is incorrect because you cannot modify the default DB parameter group values.
Option C is incorrect because you cannot modify the RDS instance that’s in an incompatible parameters state. You must reset the values of the DP parameter group currently applied to the RDS instance.
Option D is CORRECT because one (or more) parameters are set to non-default values that are not compatible with the current RDS engine or instance class. To resolve the issue, you must reset the parameters to their default values.
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/rds-incompatible-parameters/
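As a minimal sketch of Option D using boto3 (the parameter group and instance names below are hypothetical; describe_db_parameters may paginate on large groups):

```python
import boto3

rds = boto3.client("rds")

# List parameters that were modified by a user, i.e. differ from defaults.
# Parameter group name is hypothetical.
modified = rds.describe_db_parameters(
    DBParameterGroupName="custom-mysql-params",
    Source="user",
)["Parameters"]

# Reset just those parameters back to their engine defaults.
rds.reset_db_parameter_group(
    DBParameterGroupName="custom-mysql-params",
    Parameters=[
        {"ParameterName": p["ParameterName"], "ApplyMethod": "pending-reboot"}
        for p in modified
    ],
)

# A reboot is still required for the reset to take effect.
rds.reboot_db_instance(DBInstanceIdentifier="my-mysql-instance")
```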
Question 6:
An RDS DBA changes the time zone of a MariaDB RDS instance by setting the dynamic parameter time_zone in the DB Parameter Group to the local time zone of the application. An application user is still reporting an incorrect time zone. What actions should the DBA perform to resolve the issue (choose two)?
A. Ensure that the DB Parameter Group is applied to the RDS instance.
B. Instruct the application developers to update the application code to use the dynamic parameter type for the time_zone parameter.
C. Reboot the RDS DB instance.
D. Instruct the application user to disconnect from the database and start a new session.
E. Use rdsadmin.rdsadmin_util.alter_db_time_zone procedure to update the RDS instance time zone to value set in the DB Parameter Group.
Answer: A and D
Explanation:
"Option A is CORRECT because the custom DB Parameter Group where the time_zone parameter was set must be applied to the RDS instance.
Option B is incorrect because application code change is not necessary when time zone on the database has been changed.
Option C is incorrect because dynamic parameters in DB parameter groups do not require RDS instance reboot.
Option D is CORRECT because the time zone change takes effect on any new sessions to the database. Any open connections to the database will use the session time zone. To resolve the issue, the user must close the current connection and open a new connection.
Option E is incorrect because rdsadmin.rdsadmin_util.alter_db_time_zone procedure is used to set the time zone of an Oracle DB instance.
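As a minimal sketch of the two corrective actions using boto3 (the parameter group, instance name, and time zone value below are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Set the dynamic time_zone parameter. 'immediate' is allowed because the
# parameter is dynamic, so no reboot is needed. Names and values are
# hypothetical.
rds.modify_db_parameter_group(
    DBParameterGroupName="custom-mariadb-params",
    Parameters=[{
        "ParameterName": "time_zone",
        "ParameterValue": "Europe/Berlin",
        "ApplyMethod": "immediate",
    }],
)

# Ensure the custom parameter group is actually attached to the instance.
# Existing sessions keep the old time zone; users must reconnect.
rds.modify_db_instance(
    DBInstanceIdentifier="my-mariadb-instance",
    DBParameterGroupName="custom-mariadb-params",
)
```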
Question 7:
Which of the following is NOT a best practice when deploying applications using Elastic Beanstalk?
A. Amazon RDS databases should be included in the Elastic Beanstalk environment as that maintains the same life cycle for all components of the environment.
B. Amazon RDS database should be launched outside of the Elastic Beanstalk environment as that provides more flexibility.
C. The Amazon RDS connection string should be stored in a controlled S3 bucket.
D. Amazon RDS deletion protection should be enabled.
Answer: A
Explanation:
"Option A is CORRECT because it is best to decouple an Amazon RDS instance from an Elastic Beanstalk environment, especially in production environment. Launching an RDS database may be suitable for development or PoC environments, but in general it isn’t ideal as it means that termination of the Elastic Beanstalk environment will result in termination of the database as well.
Option B is incorrect because Amazon RDS database should be launched outside of the Elastic Beanstalk environment. This decouples the life-cycle of the database from the life-cycle of the Elastic Beanstalk environment. This protects the database from deletion when the Elastic Beanstalk environment is terminated. It also allows for connecting multiple environments to the same RDS instance and performing advanced deployment strategies such as blue-green deployments.
Option C is incorrect because storing RDS connection string in an encrypted, secured, and controlled S3 bucket and using Elastic Beanstalk configuration files is a valid method that can be used to securely store and configure this data outside of the application code.
Option D is incorrect because you should protect the RDS databases from accidental deletion by enabling Delete Protection.
Reference:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html
https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/rds-external-credentials.html
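As a minimal sketch of the pattern described for Option C using boto3 (the bucket, key, and JSON layout below are hypothetical):

```python
import json
import boto3

# At application startup, fetch connection details from a restricted S3 bucket
# instead of hard-coding them. Bucket, key, and JSON layout are hypothetical.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-app-config", Key="prod/db-connection.json")
db_config = json.loads(obj["Body"].read())

connection_string = (
    f"mysql://{db_config['username']}:{db_config['password']}"
    f"@{db_config['host']}:{db_config['port']}/{db_config['database']}"
)
```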
Question 8:
An application uses the GetItem operation to read data from a DynamoDB table. What strategy can be used to reduce the size of the read operations and increase read efficiency?
A. Use Filter Expression
B. Use Pagination
C. Use Parallel Scan
D. Use Projection Expression
Answer: D
Explanation:
"Option A is incorrect because filter expression can be used with Scan operations to filter the results returned by the scan operation. It cannot be used to make GetItem operations more efficient.
Option B is incorrect because Pagination is used with Scan operations to divide the result set of the Scan operation into pages. It cannot be used to make GetItem operations more efficient.
Option C is incorrect because Parallel Scans allows multi-threaded applications to perform Scan operations quicker. It cannot be used to make GetItem operations more efficient.
Option D is CORRECT because Projection Expressions can be used to limit the attributes returned by the GetItem operation and thus reduce the size of the read operation.
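As a minimal sketch of Option D using boto3 (the table, key, and attribute names below are hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Fetch only the attributes the application needs instead of the whole item.
# '#st' aliases 'status' because it is a DynamoDB reserved word.
response = dynamodb.get_item(
    TableName="orders",
    Key={"order_id": {"S": "order-1001"}},
    ProjectionExpression="customer_id, #st, total",
    ExpressionAttributeNames={"#st": "status"},
)
print(response.get("Item"))
```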
Question 9:
A company is migrating their on-premises data warehouse to Amazon Redshift. What methods can be used to establish a private connection from the on-premises network to Amazon Redshift (Select TWO)?
A. VPC Peering
B. Site-to-site VPN
C. Direct Connect
D. PrivateLink Interface Endpoint
E. PrivateLink Gateway Endpoint
Answer: B and C
Explanation:
"Option A is incorrect because VPC peering is used to establish connectivity between two Amazon VPC’s.
Option B is CORRECT because Site-to-site VPN can be used to establish a secure and private connection between an on-premise network and Amazon VPC over Internet.
Option C is CORRECT because AWS Direct Connect can be used to establish a secure and private connection between on-premise network and Amazon VPC over a dedicated line.
Option D and E are incorrect because PrivateLink endpoints are used to integrate AWS services to Amazon VPC without the use of Internet Gateway.
Reference:
https://docs.aws.amazon.com/redshift/latest/mgmt/network-isolation.html
Question 10:
A company's security team has mandated that user access to an Amazon Aurora cluster must be controlled via IAM. Which solution below implements this requirement?
A. Modify the Aurora cluster to enable IAM authentication. Grant rds_iam privilege to the user. Apply IAM policy that allows rds-db:connect action to the user.
B. Modify the Aurora cluster to enable IAM authentication. Create an IAM role with rds-db:connect action to the database. Use AWS STS AssumeRole API.
C. Modify the Aurora cluster to enable IAM authentication. Apply IAM policy that allows rds-db:connect action to the user. Use AWS STS GetSessionToken API.
D. Modify the Aurora cluster to enable IAM authentication. Create an Amazon Cognito User Pool. Create an IAM role with the rds-db:connect action on the database. Apply rule-based mapping from the Cognito User Pool to the IAM role.
Answer: A
Explanation:
"Option A is CORRECT because Amazon Aurora supports IAM authentication. In order to utilize this feature, the database cluster must be modified to enable IAM authentication. Then a database user must be created and rds_iam privilege must be granted to the user. Finally, the user must have rds-db:connect IAM permissions in order to connect to the database. This can be granted using IAM policy.
Option B is incorrect because STS is a web-service for generating access tokens and creating temporary access to users via API. It does not enable IAM authentication to RDS and Aurora databases.
Option C is incorrect because STS is a web-service for generating access tokens and creating temporary access to users via API. It does not enable IAM authentication to RDS and Aurora databases.
Option D is incorrect because Cognito is an authentication service for providing access to AWS resources to third party external users, or web and mobile apps. It does not enable and grant IAM authentication to RDS and Aurora databases.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.html
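As a minimal sketch of connecting with IAM authentication using boto3 (the cluster endpoint, user name, and CA bundle path below are hypothetical; pymysql is a third-party driver assumed to be installed):

```python
import boto3
import pymysql  # third-party MySQL driver; assumed available

# Generate a short-lived IAM authentication token and use it as the password.
# Endpoint, port, and user name are hypothetical; the connection must use SSL.
rds = boto3.client("rds")
token = rds.generate_db_auth_token(
    DBHostname="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="app_user",
)

conn = pymysql.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    port=3306,
    user="app_user",
    password=token,
    ssl={"ca": "/path/to/rds-ca-bundle.pem"},  # CA bundle path is hypothetical
)
```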