Visit the official SkillCertPro website:
For a full set of 815 questions, go to
SkillCertPro offers detailed explanations for each question, which helps you understand the concepts better.
It is recommended to score above 85% on SkillCertPro exams before attempting the real exam.
SkillCertPro updates exam questions every 2 weeks.
You will get lifetime access and lifetime free updates.
SkillCertPro assures a 100% pass guarantee on the first attempt.
Question 1:
You are designing a relational data repository on Google Cloud that can grow as needed. The data will be transactionally consistent and added from any location in the world. You want to monitor and adjust node count for input traffic, which can spike unpredictably. What should you do?
A. Use Cloud Spanner for storage. Monitor storage usage and increase node count if more than 70% utilized.
B. Use Cloud Spanner for storage. Monitor CPU utilization and increase node count if more than 70% utilized for your time span.
C. Use Cloud Bigtable for storage. Monitor data stored and increase node count if more than 70% utilized.
D. Use Cloud Bigtable for storage. Monitor CPU utilization and increase node count if more than 70% utilized for your time span.
Answer: B
Explanation:
B is correct because the requirement for globally distributed, transactionally consistent data points to Cloud Spanner. CPU utilization is the recommended scaling metric, per the Google best practices linked below.
A is not correct because you should not use storage utilization as a scaling metric.
C and D are not correct because you should not use Cloud Bigtable for this scenario: the data must be transactionally consistent and written from any location in the world.
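The recommended response to sustained high CPU utilization can be sketched with gcloud; the instance ID and node counts below are placeholder values, not part of the question.

```shell
# Check the current node count of a Spanner instance (placeholder ID).
gcloud spanner instances describe games-instance \
  --format="value(nodeCount)"

# If high-priority CPU utilization stays above the recommended threshold
# (~65% for regional instances), add nodes:
gcloud spanner instances update games-instance --nodes=5
```

In practice this scaling decision is driven by the CPU utilization metrics Cloud Spanner exports to Cloud Monitoring, which is what option B describes.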
Reference
Cloud Spanner Monitoring Using Stackdriver https://cloud.google.com/spanner/docs/monitoring
Monitoring a Cloud Bigtable Instance https://cloud.google.com/bigtable/docs/monitoring-instance
Question 2:
You are designing an application for use only during business hours. For the minimum viable product release, you'd like to use a managed product that automatically scales to zero so you don't incur costs when there is no activity. Which primary compute resource should you choose?
A. Compute Engine
B. Cloud Functions
C. Kubernetes Engine
D. App Engine flexible environment
Answer: B
Explanation:
Cloud Functions scales automatically with demand and runs nothing, and costs nothing, when there are no invocations.
Google Cloud Functions is a serverless execution environment for building and connecting cloud services. With Cloud Functions you write simple, single-purpose functions that are attached to events emitted from your cloud infrastructure and services. Your function is triggered when an event being watched is fired. Your code executes in a fully managed environment. There is no need to provision any infrastructure or worry about managing any servers.
Cloud Functions removes the work of managing servers, configuring software, updating frameworks, and patching operating systems. The software and infrastructure are fully managed by Google so that you just add code. Furthermore, provisioning of resources happens automatically in response to events. This means that a function can scale from a few invocations a day to many millions of invocations without any work from you.
Options A, C, and D are wrong because they must be explicitly configured to scale down and need warm-up time to scale back up, unlike Cloud Functions.
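Deploying such a scale-to-zero function can be sketched as follows; the function name, runtime, and entry point are placeholder values.

```shell
# Deploy an HTTP-triggered function that scales to zero between requests.
# "hello-http", the runtime, and the entry point are placeholders.
gcloud functions deploy hello-http \
  --runtime=python311 \
  --trigger-http \
  --entry-point=hello \
  --allow-unauthenticated
```

No instances run (and no compute charges accrue) until a request arrives.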
Question 3:
You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading. Where should you store the data?
A. Google BigQuery
B. Google Cloud SQL
C. Google Cloud Bigtable
D. Google Cloud Storage
Answer: C
Explanation:
C. Google Cloud Bigtable
Google Cloud Bigtable is designed for high-throughput, low-latency workloads, making it ideal for real-time data ingestion and retrieval. It can handle large volumes of data with high write and read speeds, which is perfect for the continuous stream of sensor data in your application.
Incorrect Options:
A. Google BigQuery
While BigQuery is excellent for large-scale data analysis and querying, it is not optimized for real-time data ingestion and low-latency access required for this use case.
B. Google Cloud SQL
Cloud SQL is a managed relational database service. It is not designed to handle the high write throughput and low-latency requirements of real-time sensor data ingestion.
D. Google Cloud Storage
Cloud Storage is suitable for storing large amounts of unstructured data, but it is not optimized for real-time data ingestion and low-latency access.
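Provisioning a Bigtable instance for this kind of sensor workload (50,000 sensors × 10 readings/s = 500,000 writes/s, so multiple nodes) can be sketched with gcloud; the instance ID, cluster ID, zone, and node count are placeholder values.

```shell
# Create a Bigtable instance for high-throughput time-series writes.
# All names and the node count below are illustrative placeholders.
gcloud bigtable instances create sensor-data \
  --display-name="Sensor readings" \
  --cluster-config=id=sensor-cluster,zone=us-central1-b,nodes=3
```

Row keys would typically combine sensor ID and timestamp so writes spread evenly across nodes.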
Question 4:
Which of these tools can you use to copy data from AWS S3 to Cloud Storage? (Select 2)
A. S3 Storage Transfer Service
B. gsutil
C. Cloud Storage Transfer Service
D. Cloud Storage Console
Answer: B and C
Explanation:
Cloud Storage Transfer Service transfers data from an online data source to a data sink. Your data source can be an Amazon Simple Storage Service (Amazon S3) bucket, an HTTP/HTTPS location, or a Google Cloud Storage bucket. Your data sink (the destination) is always a Google Cloud Storage bucket.
You can use Cloud Storage Transfer Service to:
Back up data to a Google Cloud Storage bucket from other storage providers.
Move data from a Multi-Regional Storage bucket to a Nearline Storage bucket to lower your storage costs.
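Both correct options can be sketched as one-liners; the bucket names are placeholders, and AWS credentials must already be configured (in `~/.boto` for gsutil, or attached to the transfer job for Storage Transfer Service).

```shell
# Option B: gsutil, reading AWS keys from the ~/.boto config file.
gsutil -m cp -r s3://my-aws-bucket/* gs://my-gcs-bucket/

# Option C: Storage Transfer Service, via the gcloud transfer surface.
gcloud transfer jobs create s3://my-aws-bucket gs://my-gcs-bucket
```

Storage Transfer Service is generally preferred for large or recurring transfers, since it runs server-side rather than through your machine.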
Reference: https://cloud.google.com/storage/transfer/
Question 5:
Which service offers the ability to create and run virtual machines?
A. Google Virtualization Engine
B. Compute Containers
C. VM Engine
D. Compute Engine
Answer: D
Explanation:
Google Compute Engine delivers virtual machines running in Google's innovative data centers and worldwide fiber network. Compute Engine's tooling and workflow support enable scaling from single instances to global, load-balanced cloud computing.
Compute Engine's VMs boot quickly, come with persistent disk storage, and deliver consistent performance. The virtual servers are available in many configurations, including predefined sizes and the option to create Custom Machine Types optimized for your specific needs. Flexible pricing and automatic sustained use discounts make Compute Engine a leader in price/performance.
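Creating a VM can be sketched in one command; the instance name, zone, machine type, and image are placeholder choices.

```shell
# Create a small Debian VM; all values below are illustrative placeholders.
gcloud compute instances create demo-vm \
  --zone=us-central1-a \
  --machine-type=e2-medium \
  --image-family=debian-12 \
  --image-project=debian-cloud
```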
Reference: https://cloud.google.com/compute/
Question 6:
Which statements about application load testing are true? (Select 2)
A. You should test at 50% more than the maximum load that you expect to encounter.
B. Your load tests should include testing sudden increases in traffic.
C. You should test at the maximum load that you expect to encounter.
D. It is not necessary to test sudden increases in traffic since GCP scales seamlessly.
Answer: A and B
Explanation:
A. You should test at 50% more than the maximum load that you expect to encounter.
Testing beyond the expected maximum load helps identify potential performance bottlenecks and ensures the application can handle unexpected traffic spikes. This aligns with best practices for resilience and autoscaling validation in Google Cloud.
B. Your load tests should include testing sudden increases in traffic.
Google Cloud offers autoscaling, but it's crucial to test how the application and infrastructure respond to sudden traffic spikes. This ensures that autoscaling mechanisms, such as GKE horizontal pod autoscalers or Compute Engine managed instance groups, react appropriately and prevent failures.
Incorrect Options:
C. You should test at the maximum load that you expect to encounter.
While testing at the expected maximum load is important, best practices recommend testing beyond that limit (e.g., 50% more) to ensure the system can handle unexpected surges.
D. It is not necessary to test sudden increases in traffic since GCP scales seamlessly.
While GCP provides autoscaling, there may be delays in scaling up resources, and cold start issues can affect performance. Testing sudden traffic increases ensures that autoscaling policies and provisioning strategies work as expected.
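A spike test like the one option B describes can be sketched with plain curl; the target URL and request counts below are illustrative placeholders, and a dedicated tool (Locust, `hey`, `ab`) would be used for real measurements.

```shell
# Placeholder endpoint; substitute your application's URL.
TARGET="https://example.com/healthz"

# Baseline: a steady trickle of sequential requests.
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" "$TARGET"
done

# Spike: 200 requests with 50 in flight at once; count status codes
# to spot non-200 responses while autoscaling catches up.
seq 1 200 | xargs -P 50 -I{} \
  curl -s -o /dev/null -w "%{http_code}\n" "$TARGET" | sort | uniq -c
```

Watching error rates and latency during the spike, and during scale-up afterwards, is what validates the autoscaling configuration.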
Question 7:
You are building a new API. You want to minimize the cost of storing and reduce the latency of serving images.
Which architecture should you use?
A. App Engine backed by Cloud Storage
B. Compute Engine backed by Persistent Disk
C. Transfer Appliance backed by Cloud Filestore
D. Cloud Content Delivery Network (CDN) backed by Cloud Storage
Answer: D
Explanation:
D. Cloud Content Delivery Network (CDN) backed by Cloud Storage
This is the optimal solution.
Cloud Storage: Provides cost-effective and scalable object storage for images.
Cloud CDN: Caches the images at edge locations, significantly reducing latency for users worldwide. This also reduces the load on Cloud Storage, further contributing to cost savings.
Incorrect Options:
A. App Engine backed by Cloud Storage
While App Engine can serve images from Cloud Storage, it doesn't inherently minimize latency the way a CDN does. App Engine is primarily for application logic, not optimized for image delivery, and it adds cost compared to Cloud Storage with Cloud CDN alone.
B. Compute Engine backed by Persistent Disk
Persistent Disk is block storage, which is less cost-effective for storing large numbers of images compared to Cloud Storage. It also requires managing a Compute Engine instance, adding operational overhead and cost. Also, this solution does not provide a CDN.
C. Transfer Appliance backed by Cloud Filestore
Transfer Appliance is for transferring large datasets into GCP, not for serving images. Cloud Filestore is a fully managed NFS file storage service, which is not suitable for serving images in a low-latency, cost-effective manner. Also, this does not provide a CDN.
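The winning architecture can be sketched with two commands; the bucket and backend names are placeholders, and a complete setup would also need a URL map and forwarding rule for the external HTTP(S) load balancer.

```shell
# Create the bucket that will hold the images (placeholder name/region).
gsutil mb -l us-central1 gs://my-image-bucket

# Expose it through the load balancer with Cloud CDN caching enabled.
gcloud compute backend-buckets create image-backend \
  --gcs-bucket-name=my-image-bucket \
  --enable-cdn
```

Once wired into a load balancer, cache hits are served from Google's edge locations, cutting both latency and Cloud Storage egress.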
Question 8:
Your code is running on Cloud Functions in project A. It is supposed to write an object in a Cloud Storage bucket owned by project B. However, the write call is failing with the error "403 Forbidden". What should you do to correct the problem?
A. Grant your user account the roles/storage.objectCreator role for the Cloud Storage bucket.
B. Grant your user account the roles/iam.serviceAccountUser role for the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account.
C. Grant the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account the roles/storage.objectCreator role for the Cloud Storage bucket.
D. Enable the Cloud Storage API in project B.
Answer: C
Explanation:
Let’s analyze the problem and the proposed solutions:
Problem:
Cloud Functions (running in Project A) needs to write an object to a Cloud Storage bucket (owned by Project B).
The operation is failing with a “403 Forbidden” error, indicating a permissions issue.
Analysis of Solutions:
Grant your user account the roles/storage.objectCreator role for the Cloud Storage bucket:
While this would grant you personal access, it doesn’t solve the problem for the Cloud Function itself. Cloud Functions use service accounts, not your personal user account, to interact with other Google Cloud services.
Grant your user account the roles/iam.serviceAccountUser role for the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account:
This allows your user account to impersonate the Cloud Function’s service account, but it doesn’t grant the service account itself the necessary permissions to write to the Cloud Storage bucket.
Grant the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account the roles/storage.objectCreator role for the Cloud Storage bucket:
This is the correct solution. The Cloud Function’s service account needs the roles/storage.objectCreator role on the destination bucket to be able to write objects.
Enable the Cloud Storage API in project B:
The Cloud Storage API is likely already enabled in Project B, as the bucket exists. If it wasn’t enabled, the error would be different. The 403 error clearly indicates a permissions problem, not an API enablement problem.
Therefore, the correct solution is:
Grant the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account the roles/storage.objectCreator role for the Cloud Storage bucket.
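The grant can be sketched with gsutil, run by someone with IAM admin rights on the bucket; the bucket name below is a placeholder.

```shell
# Grant the function's service account objectCreator on Project B's bucket.
# "project-b-bucket" is a placeholder name.
gsutil iam ch \
  serviceAccount:service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com:roles/storage.objectCreator \
  gs://project-b-bucket
```

Because the role is granted on the bucket itself, no project-level access in Project B is needed.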
Question 9:
You are porting an existing Apache/MySQL/PHP application stack from a single machine to Google Kubernetes Engine. You need to determine how to containerize the application. Your approach should follow Google recommended best practices for availability. What should you do?
A. Package each component in a separate container. Implement readiness and liveness probes.
B. Package the application in a single container. Use a process management tool to manage each component.
C. Package each component in a separate container. Use a script to orchestrate the launch of the components.
D. Package the application in a single container. Use a bash script as an entrypoint to the container, and then spawn each component as a background job.
Answer: A
Explanation:
Let’s analyze each option and determine the best approach for containerizing the Apache/MySQL/PHP application stack for GKE, following Google’s best practices for availability:
Package each component in a separate container. Implement readiness and liveness probes.
This is the recommended approach. Packaging each component (Apache, MySQL, PHP) in separate containers aligns with the principle of “one process per container.” Readiness and liveness probes are crucial for GKE to monitor the health of each container and automatically restart them if necessary, enhancing availability.
Package the application in a single container. Use a process management tool to manage each component.
While technically possible, running multiple processes within a single container goes against the “one process per container” best practice. It also makes it harder to manage and scale individual components.
Package each component in a separate container. Use a script to orchestrate the launch of the components.
While packaging each component in a separate container is good, a script that only orchestrates the launch is not enough; you also need the health checks that readiness and liveness probes provide.
Package the application in a single container. Use a bash script as an entrypoint to the container, and then spawn each component as a background job.
This approach suffers from the same drawbacks as the previous single-container option. It’s not aligned with best practices, and it makes monitoring and scaling more complex.
Therefore, the best approach is:
Package each component in a separate container. Implement readiness and liveness probes.
Reference: https://cloud.google.com/architecture/best-practices-for-building-containers
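The correct option can be sketched as a Deployment for the stateless Apache/PHP tier (MySQL would typically run as a StatefulSet or, better, Cloud SQL). The image, health-check path, and probe timings below are illustrative assumptions.

```shell
# Apply a Deployment whose containers carry readiness and liveness probes.
# Image, /healthz path, and timings are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: apache-php
        image: php:8.2-apache
        ports:
        - containerPort: 80
        readinessProbe:        # gates traffic until the pod is ready
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:         # restarts the container if it hangs
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 20
EOF
```

The readiness probe keeps unready pods out of the Service, while the liveness probe has the kubelet restart a hung container, which is exactly the availability behavior option A relies on.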
Question 10:
Your application is running on Compute Engine and is showing sustained failures for a small number of requests. You have narrowed the cause down to a single Compute Engine instance, but the instance is unresponsive to SSH.
What should you do next?
A. Reboot the machine.
B. Enable and check the serial port output.
C. Delete the machine and create a new one.
D. Take a snapshot of the disk and attach it to a new machine.
Answer: B
Explanation:
Let’s analyze each option in the context of a single unresponsive Compute Engine instance:
Reboot the machine:
A reboot can often resolve temporary software or hardware issues and is less destructive than deleting the instance, but it discards the in-memory state and does not tell you why the instance became unresponsive.
Enable and check the serial port output:
This is the most informative step. The serial port output provides valuable debugging information, including boot logs, error messages, and kernel output. It can help you identify the root cause of the issue without resorting to more drastic measures.
Delete the machine and create a new one:
This is a last resort. While it will resolve the immediate issue, it doesn’t help you understand the root cause. It also means you will lose any data that was not persistently stored on a separate disk.
Take a snapshot of the disk and attach it to a new machine:
This is a good option if you need to preserve the data and configuration of the instance for further analysis. However, it doesn’t address the immediate issue of restoring service. It also doesn’t allow you to diagnose the issue on the original VM.
Given that the instance is unresponsive to SSH, and you need to investigate the cause of the sustained failures, the best next step is:
Enable and check the serial port output.
If a reboot does not fix the issue, then taking a snapshot of the disk and attaching it to a new machine is a good next step.
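The recommended step can be sketched with gcloud; the instance name and zone are placeholders. `get-serial-port-output` reads the buffered console log, while the `serial-port-enable` metadata additionally allows interactive serial console connections.

```shell
# Allow interactive serial console access on the stuck VM (placeholder names).
gcloud compute instances add-metadata failing-vm \
  --zone=us-central1-a \
  --metadata=serial-port-enable=TRUE

# Read the VM's serial console output (boot logs, kernel messages, errors).
gcloud compute instances get-serial-port-output failing-vm \
  --zone=us-central1-a
```

The console output often shows kernel panics, OOM kills, or a wedged SSH daemon, pinpointing the cause without destroying the instance.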