Visit the Official SkillCertPro Website:
For the full set of 1,564 questions, go to
https://skillcertpro.com/product/snowflake-snowpro-core-exam-questions/
SkillCertPro offers detailed explanations for each question, which help you understand the concepts better.
It is recommended to score above 85% in SkillCertPro exams before attempting the real exam.
SkillCertPro updates exam questions every 2 weeks.
You will get lifetime access and lifetime free updates.
SkillCertPro assures a 100% pass guarantee on the first attempt.
Question 1:
Which statement accurately describes how a virtual warehouse functions?
A. Each virtual warehouse is a compute cluster composed of multiple compute nodes allocated by Snowflake from a cloud provider.
B. Increasing the size of a virtual warehouse will always improve data loading performance.
C. Each virtual warehouse is an independent compute cluster that shares compute resources with other warehouses.
D. All virtual warehouses share the same compute resources, so performance degradation of one warehouse can significantly affect all the other warehouses.
Answer: A
Explanation:
Query Processing
Query execution is performed in the processing layer. Snowflake processes queries using “virtual warehouses”. Each virtual warehouse is an MPP compute cluster composed of multiple compute nodes allocated by Snowflake from a cloud provider.
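For illustration, a minimal sketch of creating such a cluster (the warehouse name is hypothetical); the size setting controls how many compute nodes Snowflake allocates:
CREATE WAREHOUSE IF NOT EXISTS demo_wh
  WAREHOUSE_SIZE = 'MEDIUM'      -- larger sizes mean more compute nodes per cluster
  INITIALLY_SUSPENDED = TRUE;    -- create it suspended so it consumes no credits until used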
Question 2:
If auto-suspend is enabled for a Virtual Warehouse, the Warehouse is automatically suspended when:
A. There are no users logged into Snowflake.
B. The Warehouse is inactive for a specified period of time.
C. The last query using the Warehouse completes.
D. All Snowflake sessions using the Warehouse are terminated.
Answer: B
Explanation:
The correct answer is: The Warehouse is inactive for a specified period of time.
When auto-suspend is enabled for a Virtual Warehouse, Snowflake automatically suspends the warehouse after a certain period of inactivity. This helps optimize resource utilization and reduce costs by avoiding unnecessary compute resource consumption.
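As an illustration (warehouse name is hypothetical), the inactivity period is set in seconds on the warehouse itself:
ALTER WAREHOUSE demo_wh SET
  AUTO_SUSPEND = 300    -- suspend after 5 minutes of inactivity
  AUTO_RESUME = TRUE;   -- resume automatically when the next query arrives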
Question 3:
Which statistic displayed in a Query Profile is specific to external functions?
A. Total invocations
B. Bytes sent over the network
C. Partitions scanned
D. Bytes written
Answer: A
Explanation:
Total invocations: the number of times an external function was called. This can differ from the number of external function calls in the text of the SQL statement because of the number of batches that rows are divided into, the number of retries (if there are transient network problems), and so on.
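For example (the function and table names below are hypothetical), a single call site in the SQL text can still produce many invocations, because Snowflake sends rows to the remote service in batches:
-- One external function call in the SQL text, but the Query Profile's
-- "Total invocations" reflects the number of batches (plus any retries)
SELECT id, my_remote_scoring_fn(payload) AS score
FROM events;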
Question 4:
How does Snowflake store a table's underlying data? (Choose two.)
A. Columnar file format
B. Text file format
C. User-defined partitions
D. Micro-partitions
E. Uncompressed
Answer: A and D
Explanation:
All data in Snowflake tables is automatically divided into micro-partitions, which are contiguous units of storage. Each micro-partition contains between 50 MB and 500 MB of uncompressed data (note that the actual size in Snowflake is smaller because data is always stored compressed). Groups of rows in tables are mapped into individual micro-partitions, organized in a columnar fashion.
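One way to observe micro-partitioning (table and column names here are hypothetical) is the clustering information function, which reports how a table's micro-partitions overlap on the given columns:
-- Returns JSON describing micro-partition count, depth, and overlap
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(order_date)');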
Question 5:
What is the MOST performant file format for loading data in Snowflake?
A. CSV (Gzipped)
B. ORC
C. Parquet
D. CSV (Unzipped)
Answer: A
Explanation:
Loading from gzipped CSV is several times faster than loading from ORC or Parquet, at an impressive 15 TB/hour. While 5-6 TB/hour is decent if your data is originally in ORC or Parquet, don't go out of your way to create ORC or Parquet files from CSV in the hope that they will load into Snowflake faster.
Loading data into a fully structured (columnarized) schema is ~10-20% faster than landing it in a VARIANT column.
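A minimal loading sketch, assuming a stage and table already exist (names are hypothetical); Snowflake auto-detects gzip compression, but it can also be declared explicitly:
-- Load gzipped CSV files from a named stage
COPY INTO sales
  FROM @my_stage/data/
  FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP' SKIP_HEADER = 1);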
Question 6:
When cloning an entire database, the internal (Snowflake) stages inside that database are also cloned.
A. TRUE
B. FALSE
Answer: B
Explanation:
The following rules apply to cloning stages or objects that contain stages (i.e. databases and schemas):
Individual external named stages can be cloned; internal named stages cannot be cloned.
When cloning a database or schema:
External named stages that were present in the source when the cloning operation started are cloned.
Tables are cloned, which means their internal stages are also cloned.
Internal named stages are not cloned.
Regardless of how a stage was cloned, the clone does not include any of the files from the source; i.e., all cloned stages are empty.
For details, see https://docs.snowflake.com/en/user-guide/object-clone.html#cloning-and-stages
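A quick sketch of this behavior (database and table names are hypothetical):
-- Clone the database: external named stages are cloned, internal named stages are not
CREATE DATABASE dev_db CLONE prod_db;
-- Tables are cloned along with their table stages, but no files come across
LIST @dev_db.public.%my_table;   -- the cloned table's stage holds none of the source's files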
Question 7:
Which file format option should be enabled while loading a JSON file to remove the outermost array structure and load the records into separate table rows?
A. REMOVE_OUTER_ARRAY
B. ELIMINATE_OUTER_ARRAY
C. STRIP_OUTER_ARRAY
D. READ_INNER_ARRAY
Answer: C
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/semistructured-considerations.html
In general, JSON and Avro data sets are a simple concatenation of multiple documents. The JSON or Avro output from some software is composed of a single huge array containing multiple records. There is no need to separate the documents with line breaks or commas, though both are supported.
Instead, we recommend enabling the STRIP_OUTER_ARRAY file format option for the COPY INTO <table> command to remove the outer array structure and load the records into separate table rows.
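For example (stage and table names are hypothetical), each element of the outer JSON array then becomes its own row:
COPY INTO raw_events
  FROM @json_stage/events.json
  FILE_FORMAT = (TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE);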
Question 8:
Each Snowflake account comes with two shared databases. One is a set of sample data and the other contains Account Usage information. Check all true statements about these shared databases.
A. SNOWFLAKE_SAMPLE_DATA contains a schema called ACCOUNT_USAGE
B. SNOWFLAKE contains a schema called ACCOUNT_USAGE
C. ACCOUNT_USAGE is a schema filled with secure views
D. SNOWFLAKE contains a table called ACCOUNT_USAGE
E. ACCOUNT_USAGE is a schema filled with external tables
F. SNOWFLAKE_SAMPLE_DATA contains several schemas from TPC (tpc.org)
Answer: B, C, and F
Explanation:
Here's a breakdown of the statements about Snowflake's shared databases:
True:
SNOWFLAKE contains a schema called ACCOUNT_USAGE: the system-defined, read-only shared database named SNOWFLAKE contains a schema called ACCOUNT_USAGE with object metadata and usage metrics.
ACCOUNT_USAGE is a schema filled with secure views: because SNOWFLAKE is a shared database, the objects in the ACCOUNT_USAGE schema are secure views (such as QUERY_HISTORY and LOGIN_HISTORY), not tables.
SNOWFLAKE_SAMPLE_DATA contains several schemas from TPC (tpc.org): the sample database includes schemas based on the TPC-H and TPC-DS benchmarks (e.g. TPCH_SF1, TPCDS_SF10TCL).
False:
SNOWFLAKE_SAMPLE_DATA contains a schema called ACCOUNT_USAGE: the ACCOUNT_USAGE schema lives in the SNOWFLAKE database, not in the sample database.
SNOWFLAKE contains a table called ACCOUNT_USAGE: ACCOUNT_USAGE is a schema inside the SNOWFLAKE database, not a table.
ACCOUNT_USAGE is a schema filled with external tables: it contains secure views, not external tables. External tables point to data residing outside of Snowflake.
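The ACCOUNT_USAGE schema is queried like any other; for example, QUERY_HISTORY is one of its secure views (note that these views can lag real time by up to a few hours):
SELECT query_id, warehouse_name, total_elapsed_time
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
ORDER BY start_time DESC
LIMIT 10;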
Question 9:
While loading data through the COPY command, you can transform the data.
Which of the below transformations is not allowed?
A. Omit columns.
B. Cast.
C. Truncate columns.
D. Filters.
E. Reorder columns.
Answer: D
Explanation:
Filters is the transformation that is not allowed during the COPY command.
While the COPY command offers powerful data transformation capabilities, including:
Omitting columns: You can specify which columns to load, ignoring others.
Casting: You can convert data types between supported formats.
Truncating columns: You can shorten text values to fit a specific length.
Reordering columns: You can change the order of columns in the target table.
It does not support filtering rows based on specific conditions. For filtering, you would typically use a separate SQL query or a data pipeline tool before loading the data into the target table.
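A sketch of a transforming COPY (stage, table, and column names are hypothetical); note the absence of a WHERE clause, since row filtering is not supported here:
COPY INTO orders (order_id, amount, region)
FROM (
  SELECT t.$1, t.$3::NUMBER(10,2), t.$2   -- reorder and cast; column $4 is omitted
  FROM @my_stage/orders/ t
)
FILE_FORMAT = (TYPE = 'CSV');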
Question 10:
A team runs the same query daily, generally less than 24 hours apart, and it takes around 10 minutes to execute. They realized that the underlying data changes because of an ETL process that runs every morning.
How can they use the results cache to save the 10 minutes that the query is being executed?
A. After the ETL run, use Time-Travel feature.
B. After the ETL run, copy the tables to another database for the team to query.
C. After the ETL run, increase the warehouse size. Decrease it after the query runs.
D. After the ETL run, execute the identical queries so that the result remains in the cache.
Answer: D
Explanation:
In this case, because the underlying data changes every morning due to the ETL process, the previous day's cached results are invalidated. However, if the team executes an identical query immediately after the ETL process runs, the result of that query is stored in the results cache and can be retrieved by subsequent identical queries. By doing so, the team saves the 10 minutes of execution time, since later runs are served directly from the cache.
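A sketch of the idea (the query and table are hypothetical): run the exact query text once right after the ETL finishes, and later identical executions are served from the result cache without spinning up a warehouse:
-- Run immediately after the morning ETL completes
SELECT region, SUM(amount) AS total_amount
FROM orders
GROUP BY region;
-- Subsequent identical queries within 24 hours hit the result cache,
-- as long as the underlying data has not changed in the meantime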