Hypothesis Testing Part II: How to Conduct a Hypothesis Test in Excel and Google Sheets
Ed Direction Data Fellows Asynchronous Module
August 2022
Welcome to the Hypothesis Testing Part II Asynchronous Module!
Where We've Been, Where We're Headed
In this module we will build on the concepts introduced in Hypothesis Testing Part I, where we explored the underlying statistical principles that make hypothesis testing possible. Here, we will introduce the different types of hypothesis tests you can conduct and provide a step-by-step guide for conducting them in Excel. We follow the Excel guide with a description of how to perform hypothesis tests in Google Sheets.
Before beginning this module, we highly recommend that you complete the previous asynchronous modules: Calculating Descriptive Statistics in Excel/Google Sheets and Hypothesis Testing Part I.
We introduced these requests in the previous session and want to reemphasize them before completing our exploration of hypothesis testing.
Please approach this module with a growth mindset. As we describe the core statistical underpinnings that make hypothesis testing possible, we ask that you believe in your internal capacity to understand these ideas. If you will commit to working through this module, we will commit to explaining these principles as clearly as we can.
Complete this module at a comfortable pace. Please do not feel like you have to power through the content in one sitting. If you want to break it up into two or three sessions to let the information sink in, please do!
Read the language carefully. The principles we are about to introduce are explained very specifically to provide accurate descriptions of what hypothesis tests enable us to say and not say. We will do our best to accurately describe the essential nuances that make these tests possible without overloading you with too much information.
Please realize that you will not need to do any math by hand in order to perform a hypothesis test. Excel will do that for you. Throughout the module we will show you some of the math that makes hypothesis testing possible in order to build your understanding of how and why these concepts work. But rest assured that if solving math problems by hand is not your thing, you will still be able to conduct these analytical tests on your RSSP data.
Have fun! If statistics is new to you, it can be intimidating. But if you embrace the stress and work to reason through these beautifully intuitive concepts, you’ll be surprised at how much fun deeper data analysis can be.
Click on the button to the left to open the note-catcher, which is mirrored to follow the content as it is presented on the Learning Space. As you navigate through this module, you are welcome to use this optional tool to capture your notes.
Key Terms Used in This Module:
Variance: the average squared distance between an observed value and its mean.
Standard Deviation: the average distance between an observed value and its mean.
Test Statistic: the relative distance between an observed value and the expected value under the Null Hypothesis.
P-Value: the probability of observing a value at least as extreme if the Null Hypothesis is correct.
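The first two terms above (variance and standard deviation) have built-in spreadsheet functions in both Excel and Google Sheets. As a small sketch (the cell range here is hypothetical):
=VAR.S(A3:A175) returns the sample variance of the values in A3:A175
=STDEV.S(A3:A175) returns the sample standard deviation, which is simply the square root of the variance, so =SQRT(VAR.S(A3:A175)) gives the same number.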
Case Study: Central Peaks ISD
Prior to the start of the 2021-2022 school year, the Central Peaks ISD RSSP team had selected 3rd grade reading as its focus area and HQIM as its intervention strategy. The district's RSSP team was especially concerned with the performance of ELA and Economically Disadvantaged students compared to their Non-ELA and Non-Economically Disadvantaged counterparts. Accordingly, the RSSP team had three central questions going into the school year:
Will our 3rd grade students this year (who were 2nd grade students last year) perform the same, better, or worse on their STAAR Reading assessments after a full year of receiving instruction with HQIM from their teachers?
At the end of the year, will the difference in scores between our Non-ELA and ELA students be statistically significant?
At the end of the year, will the difference in scores between our Non-Economically Disadvantaged and Economically Disadvantaged students be statistically significant?
The RSSP team, district leadership, and school leadership worked hard all year to ensure that the implementation of HQIM was done with fidelity, and during the following Summer, the STAAR Reading scores were released. The Central Peaks Data Fellow transferred the newly released data into an Excel spreadsheet and began working to answer the RSSP team's three central questions.
During the second half of this module, you will be given access to Central Peaks' STAAR Reading scores and will be tasked with answering the district's central questions.
Part I: Choosing a Distribution
Z-Tests vs T-Tests
Before you can perform hypothesis tests on Central Peaks' STAAR Reading scores, there are a few final concepts you will need to know. They are:
Z-Tests vs T-Tests
Dependent Sample T-Tests vs Independent Sample T-Tests
Two-Tailed T-Tests vs One-Tailed T-Tests
After introducing these concepts, we will walk through a step-by-step guide for conducting a hypothesis test in Excel, followed by a description of how to perform these tests in Google Sheets.
A z-test refers to a hypothesis test that uses the Z-Distribution. The Z-Distribution is pictured to the right, and as you can see, it is a standard normal distribution that exhibits the same special characteristics we learned about in the previous module.
When conducting a z-test, we pair our test statistic with the Z-Distribution to determine how many standard deviations (technically standard errors) our observed value is away from the mean under the Null Hypothesis. Because the Z-Distribution is a standard normal distribution, the mean value will always be in the center.
In order to use a z-test, you have to meet two criteria: your sample size (the number of people in your data set) needs to be sufficiently large (typically defined as 30 or more), and you need to know the standard deviation of the overall population you are analyzing.
Image source: w3schools
This second qualification -- that you need to know the standard deviation of the population -- is a difficult requirement to meet.
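To see where that population standard deviation enters the picture, here is the z-test's test statistic written as a hypothetical spreadsheet formula (the range A3:A175, the hypothesized mean of 1500, and the population standard deviation of 150 are all made-up values used only to show the structure of the calculation):
=(AVERAGE(A3:A175) - 1500) / (150 / SQRT(COUNT(A3:A175)))
The observed sample mean is compared to the mean assumed under the Null Hypothesis, and the difference is divided by the standard error -- the population standard deviation divided by the square root of the sample size. Without that known population standard deviation of 150, the calculation cannot be completed, which is exactly the problem described below.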
In the previous module, we explained the difference between a population and a sample. In that module, we described that a population consists of everyone who belongs to a defined group. We also described that a sample is a subset of everyone who belongs to a defined group.
The difference between a population and a sample is one of the most important concepts in all of statistics. If we look at the definition of statistics, we'll see why:
Statistics is the practice or science of collecting and analyzing numerical data in large quantities, especially for the purpose of inferring proportions of a whole from those in a representative sample. (Oxford Languages)
From this definition we can surmise that the goal of statistics is to use the values we find in samples to estimate the values inherent within populations.
We use samples because they are much easier (and cheaper) to measure. For example, the U.S. government needed 14.2 billion dollars to conduct the most recent census. Despite the incredible resources dedicated to measuring the nation's population, some subgroups within the U.S. were still undercounted.
By using statistics, we can use much smaller samples to calculate accurate approximations of the U.S. population. For example, if we wanted to know the average income of adults in the United States, we could attempt to measure the income of all 258 million adults in the U.S. Alternatively, we could obtain a random sample of 1,000 adults, calculate their average income, and then use that average to estimate the average income of every adult within the United States.
How does this relate to z-tests? It relates because you rarely know the actual standard deviation of a particular value for a population. This means that you are rarely able to use z-tests in your work.
A t-test refers to a hypothesis test that uses a T-Distribution. T-Distributions are similar to Z-Distributions because they are symmetrical and their probability densities are known -- enabling us to use test statistics to perform hypothesis tests.
T-Distributions are different from Z-Distributions because they do not assume that the population standard deviation is known, and the shape of the distribution is determined by its degrees of freedom. You can see a visualization of the T-Distribution family on the right.
In circumstances when you are working with a sample size that is small and/or you do not know the standard deviation of the population, you should use a t-test.
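The t-test's test statistic has the same structure as the z-test's, except the unknown population standard deviation is replaced by the standard deviation calculated from your sample. Continuing the hypothetical sketch from above (same made-up range and hypothesized mean):
=(AVERAGE(A3:A175) - 1500) / (STDEV.S(A3:A175) / SQRT(COUNT(A3:A175)))
Because the sample standard deviation is itself an estimate, the resulting statistic is compared to a T-Distribution rather than the Z-Distribution.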
Degrees of freedom are defined as the number of independent pieces of information within a distribution. Degrees of Freedom are highly related to your sample size – the number of people in your dataset -- and they are a notoriously tricky subject in statistics. Luckily, you do not need an in-depth knowledge of Degrees of Freedom to perform a hypothesis test in Excel.
However, if you want to develop an understanding of what they are and why they are important, we recommend starting with the two videos below. If you are feeling extra ambitious, you could also listen to this episode of Quantitude on Information Theory. If you are not interested in learning about degrees of freedom, feel free to skip over these linked resources!
The image on the right, taken from JMP’s Statistics Knowledge Portal, visually compares the Z-Distribution to a few T-Distributions.
This visualization has two key points:
The T-Distribution we use for our hypothesis test depends on our degrees of freedom, which is heavily related to our sample size.
As the degrees of freedom increase, the T-Distribution more closely resembles the Z-Distribution (a standard normal distribution).
When working with your RSSP Data, we recommend using t-tests instead of z-tests. We make this recommendation for two reasons:
While there is an argument to be made that we could theoretically know the standard deviations of the population we are studying (students in our RSSP focus areas), we also have to consider that we will have missing data for students. Whether they transfer into or out of our LEA, we may not have the capacity to measure everything we need to credibly claim that we know the true standard deviation of our target population.
As our sample sizes increase, so too do our degrees of freedom, meaning that the T-Distribution on which we base our hypothesis test will more closely resemble the Z-Distribution.
The flexibility provided to us by T-Distributions is why we recommend you conduct t-tests on your RSSP data.
There are four main categories of T-Tests we can conduct. The types are:
Dependent Sample (Paired)
Independent Sample (Unpaired)
Two-Tailed
One-Tailed
The test we use depends on the attributes of our data and the analytical question we are asking. Below is a user-friendly guide that explains when, how, and why to use each test.
We use dependent sample t-tests when we are comparing the repeated measurements of the same people over time. In your work as a Data Fellow, you can use a dependent sample t-test when you track the performance of students over time. For example, if you compare BOY and MOY 3rd grade reading scores on unit assessments, you will use a dependent sample t-test to determine if the differences between BOY and MOY scores are statistically significant.
In this instance, you are comparing a group of students against themselves. This means that the students in your first group (at the beginning of the year) and the students in your second group (in the middle of the year) are not independent from each other – they are the same students just at different times. Another example would be if you were to compare the scores of 4th grade economically disadvantaged students in your LEA before an RSSP intervention strategy was implemented and after it was implemented.
Dependent Sample T-Tests are often called Paired T-Tests because the observations are “paired” together – student A's score at time one and student A’s score at time two.
When conducting a Dependent Sample T-Test, ensure that the number of students (rows in your spreadsheet) is the same for your two groups. It is very likely that students will transfer in and out of your LEA throughout the school year. As part of your job, you will need to ensure that when you conduct your dependent sample t-test, you have cleaned your dataset to include only the students who have been at the school throughout the entire school year.
If you include other students who have transferred into your school within your t-test, you are actually analyzing the effect of their previous school’s RSSP strategy + the effect of your school’s RSSP strategy on your RSSP goals. Because you are not attempting to measure the influence of a previous district's intervention on your RSSP goals, we recommend not including transfer students in your t-test. However, you can feel free to ignore this advice if you and your RSSP team believe that transfer students should be included.
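As a preview of what a dependent sample t-test looks like in a spreadsheet (the full formula syntax is covered at the end of this module), suppose, hypothetically, that your BOY scores sat in B3:B120 and the MOY scores for the same students, in the same row order, sat in C3:C120. The formula would be:
=TTEST(B3:B120, C3:C120, 2, 1)
The final "1" tells the spreadsheet to run a dependent (paired) test; the "2" requests a two-tailed test, which is explained later in this module.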
We use independent sample t-tests when we are comparing the means of two different groups. In your work as a Data Fellow, you could use independent sample t-tests if you are comparing academic performance between two different groups of students. For example, if you wanted to determine whether there was a statistically significant difference between the test scores of Non-ELA and ELA students, an independent sample t-test would be the best option.
Independent Sample T-Tests are often called "unpaired t-tests" because the values in the t-test are not paired -- they are not two measurements of the same person. They are measurements of different people. Because the data are not paired, we do not need to have the same number of people in both groups. For example, if you want to conduct an independent t-test on 84 Non-ELA students and 51 ELA students, you can.
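Continuing the same hypothetical sketch, if the 84 Non-ELA scores sat in B3:B86 and the 51 ELA scores sat in C3:C53, an independent two-tailed t-test would look like:
=TTEST(B3:B86, C3:C53, 2, 2)
Here the final argument is "2" (or "3") rather than "1" because the samples are independent; how to choose between those two options is explained next.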
Within the independent sample t-test category, there are two tests from which you can choose:
An Independent T-Test that assumes equal variance between your two groups.
An Independent T-Test that assumes unequal variance between your two groups.
We will teach you later in the module how to use Excel and Google Sheets to compare the variances of your two groups so that you can know which independent t-test you should choose.
In the previous module, we explained that variance is a measure of how much a variable changes in relation to its mean. If you would like a refresher, please visit the first asynchronous module Hypothesis Testing Part I.
If we revisit the visualization of the normal distribution and the 68-95-99.7 rule from the last module, we can see that roughly 5% of the observations of a normal distribution are more than two standard deviations away from the mean. These areas are represented by the green areas of the picture. The visual below shows 2.5% of the observations more than two standard deviations below the mean and 2.5% of the observations more than two standard deviations above the mean.
When we conduct a Two-Tailed T-Test, we are attempting to determine if the difference between the mean of two groups falls within one of the green areas. When we choose to use a Two-Tailed Test, we are effectively saying "I want to see if the difference between the means of my two groups is statistically significant in the positive or negative direction".
An example of this in your work as a Data Fellow would be if you wanted to conduct a t-test to compare the mean test scores of economically disadvantaged students before and after an RSSP intervention strategy was implemented to see if there was a statistically significant difference in the positive or negative direction.
This is an uncomfortable idea. If our LEA has spent significant time and resources working to implement an RSSP strategy, the hope is that the strategy would improve student learning. Unfortunately, there exists the real possibility that students will actually do worse under the new intervention than they would have done if they were allowed to continue learning under the previous status quo. While we cannot fully determine which learning environment is better for students (the status quo or the new intervention) without establishing a control group and a treatment group, we should recognize that student scores may become worse under the new RSSP strategy than they were before the strategy was implemented. And if they are worse, we want to give ourselves the ability to detect that effect. That is only possible if we use a Two-Tailed T-Test.
Compare the image of the distribution in the previous section to the two images featured below, sourced from UCLA.
You will notice that the key difference between the image in the previous section and these two images is that the shaded area that comprises 5% of the distribution is not split between the extreme negative and extreme positive sides of the distribution. Instead, in these images, the 5% area of the distribution sits entirely on the extreme negative side of the mean or entirely on the extreme positive side of the mean.
When we conduct a One-Tailed T-Test, we are effectively saying that we are confident our intervention is only going to have a negative effect OR a positive effect. By conducting a one-tailed t-test, we are giving ourselves a better chance of detecting an effect on one side of the distribution, if an effect actually exists. The catch is that by giving ourselves a greater chance of detecting an effect on one side of the distribution, we make it impossible to detect an effect on the other side of the distribution. We are putting all of our statistical eggs in one basket.
This introduces significant risk into our analysis. While it does provide us with the advantage of increasing the possibility that we are able to detect a positive effect, if it exists, it also eliminates our ability to detect a negative effect, if it exists. Because of this, we strongly encourage you to conduct two-tailed tests when working with your RSSP data. As mentioned above, it is very difficult to be certain that our RSSP interventions are guaranteed to have a positive effect on student performance. If we assume our intervention is having a positive effect on student performance and conduct a one-tailed test, we run the risk of not detecting a negative effect. If our RSSP intervention is influencing students in a negative way, that is information we would want our RSSP team to know.
Now that you have an enhanced understanding of the what and why behind t-tests, watch this Stat Quest video in which Josh Starmer gives fantastic advice on which t-test you should use most frequently.
To reemphasize the point he makes, if you are unsure about which t-test to run, choose a two-tailed test that assumes unequal variance. This is the hardest test to pass, meaning that you give yourself the smallest chance of claiming a difference that is not really there.
Statistics, and science in general, prefers to be conservative. We would much rather fail to detect a difference between groups that is there than detect a difference between groups that isn’t there. This is why we generally default to stricter tests.
However, if you have a strong understanding of the principles behind t-tests, you have the opportunity to carefully select the right test for the right situation, correctly giving yourself a higher chance of detecting a difference, if it exists.
We finally have the knowledge we need to perform hypothesis tests!
Note: Unlike the previous module on calculating descriptive statistics, there are some differences between Microsoft Excel and Google Sheets when conducting t-tests. When using Excel, we have the capacity to add the Data Analysis Toolpak, which enables us to perform more advanced statistical tests on our data. While Google Sheets does not have a comparable add-on, it is still able to compute hypothesis tests.
While the guide below specifically focuses on how to conduct t-tests in Excel, please read through it even if you only have access to Google Sheets. Reading through each step will make understanding how to write formulas for t-tests (which you can do in Google Sheets and which will be explained at the end of the module) much easier.
It's also worth noting that some dashboard platforms are able to perform t-tests and many are not. If the platform you are using has the capability to perform t-tests, the knowledge you have gained from this module should enable you to figure out how to do it.
With those notes in mind, let's get started.
Prep Step 1: Add the Data Analysis Toolpak
Follow these instructions to add the Data Analysis Toolpak to Excel
Prep Step 2: Access the Data
Copy and Paste the Central Peaks Data from the Google Sheet into an Excel Spreadsheet.
How to Conduct a Two-Tailed Dependent T-Test
Step 1. Select the function icon (fx)
Step 2. Search for and select the "ttest" function
Step 3. Enter the conditions for the t-test
Click in the box next to "Array 1" and select the scores under the "Pre-HQIM" column in the Central Peaks ISD spreadsheet. The array, or list of cells, should read A3:A175.
Click in the box next to "Array 2" and select the scores under the "Post HQIM" column in the Central Peaks ISD spreadsheet. The array, or list of cells, should read B3:B175.
Click in the box next to "Tails" and enter "2". This indicates you want to conduct a two-tailed test.
Click in the box next to "Type" and enter "1". This indicates that you want to conduct a dependent sample t-test.
Step 4. Interpret test results
As soon as you enter your test conditions, Excel will generate the p-value for you. You can see the p-value underneath the conditions you entered.
In this case, our p-value is .926. This means that if the null hypothesis is true and the difference between the means of our two groups is in fact 0, there is a 92.6% chance that we would observe a difference in the means of our two groups at least this large.
If you remember from the last module, we only reject the null hypothesis if our p-value is at or less than .05. Because our p-value is above .05, we fail to reject the null hypothesis.
This means that, according to the data and our test, there is no statistically significant difference between the average student test score before HQIM was implemented and after HQIM was implemented. Bummer.
How to Conduct a Two-Tailed Independent T-Test that Assumes Equal Variance
Step 1. Calculate the variance of test scores for Non-ELA and ELA students
In order to know if we should conduct an independent t-test that assumes equal variance, we first need to find the variances of our two groups and compare them.
We find the variances of each group by using the simple function =VAR.S()
The VAR stands for "variance" and the .S stands for "sample". This means we are calculating the variance for a sample.
Inside of the parentheses, you will enter the range of values for which you want to find the variance. In our case, the range of values is D3:D119. Your range may be different if you copied and pasted the data in a different location on your spreadsheet.
In total, your function should look like this: =VAR.S(D3:D119)
Once you hit enter, the value 27289.28706 should appear. That is the variance for Central Peaks' Non-ELA student test scores.
Repeat the process again to find the variance of ELA test scores. If your spreadsheet looks exactly like the Google Sheet attached above, you should use the following function and data range to calculate the variance: =VAR.S(E3:E58)
When you press enter, the value 25524.17143 should appear. That is the variance for Central Peaks' ELA student test scores.
Step 2. Perform an F-Test
An f-test is very similar to a t-test. The null hypothesis for our f-test is that there is no difference in the variances of our two groups. The alternative hypothesis is that there is a difference in the variances of our two groups. We will reject the null hypothesis if the p-value generated by the f-test is less than .05.
To perform an f-test, you will use the simple Excel function =FTEST().
You will then enter the value ranges you want to test.
It is recommended that the first range of values you insert into the function is the range with the higher variance. In our case, we will enter the scores of Non-ELA students first and the scores of ELA students second.
If your spreadsheet looks exactly like the Google Sheet attached above, your function should look like this: =FTEST(D3:D119, E3:E58)
When you hit enter, the number .7955 should appear. This number is our p-value. Because this p-value is greater than .05, we fail to reject the null hypothesis. In other words, we do not have strong enough evidence to claim that the variances of our two groups are statistically different. Or, using a simpler explanation, the variance between Non-ELA and ELA test scores is roughly equal.
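As an optional sanity check, you can also look at the ratio of the two variances you calculated in Step 1:
=VAR.S(D3:D119)/VAR.S(E3:E58)
This returns roughly 1.07, and a ratio close to 1 is consistent with the f-test's conclusion that the two variances are roughly equal.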
This means we will choose to conduct a two-tailed t-test that assumes equal variance.
Step 3. Navigate to the functions window in Excel and select TTEST
Step 4. Enter the conditions for the t-test
Click in the box next to "Array 1" and select the scores under the "Non-ELA" column in the Central Peaks ISD spreadsheet. The array, or list of cells, should read D3:D119.
Click in the box next to "Array 2" and select the scores under the "Post HQIM" column in the Central Peaks ISD spreadsheet. The array, or list of cells, should read E3:E58.
Click in the box next to "Tails" and enter "2". This indicates you want to conduct a two-tailed test.
Click in the box next to "Type" and enter "2". This indicates that you want to conduct an independent sample t-test that assumes equal variances.
Step 5. Interpret test results
As soon as you enter your test conditions, Excel will generate the p-value for you. You can see the p-value underneath the conditions you entered.
In this case, our p-value is .0017. This means that if the null hypothesis is true and the difference between the means of our two groups is in fact 0, there is a .17% chance (note, not a 17% chance, but a .17% chance) that we would observe a difference in the means of our two groups that is at least as extreme as what we are observing.
This t-test generated a p-value that was less than .05. We can therefore reject the null hypothesis.
This means that, according to the data and our test, there is a statistically significant difference between the average non-ELA student test score and the average ELA student test score.
Note. The p-value does not tell us how large that difference is. It simply tells us that the difference is statistically significant.
How to Conduct a Two-Tailed Independent T-Test that Assumes Unequal Variance
Step 1. Calculate the variance of test scores for Non-EcoDis and EcoDis students.
In order to know if we should conduct an independent t-test that assumes unequal variance, we first need to find the variances of our two groups and compare them.
We again find the variances of each group by using the simple function =VAR.S()
To find the variance of Non-EcoDis test scores, your function will look like this (if your spreadsheet looks exactly like the Google Sheet attached above): =VAR.S(G3:G96)
Once you hit enter, the value 41533.08591 should appear. That is the variance for Central Peaks' Non-EcoDis student test scores.
Repeat the process again to find the variance of EcoDis test scores. If your spreadsheet looks exactly like the Google Sheet attached above, you should use the following function and data range to calculate the variance: =VAR.S(H3:H81)
When you press enter, the value 6868.82246 should appear. That is the variance for Central Peaks' EcoDis student test scores.
Step 2. Perform an F-Test
If your spreadsheet looks exactly like the Google Sheet attached above, your function should look like this: =FTEST(G3:G96, H3:H81)
When you hit enter, the number 2.61124E-14 should appear. In this case, the p-value is so low that Excel needed to use scientific notation to express it. In its non-scientific notation form, the number is 0.0000000000000261124. Because the p-value is less than .05, we can reject the null hypothesis.
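Here, too, the ratio of the two variances tells the same story:
=VAR.S(G3:G96)/VAR.S(H3:H81)
This returns roughly 6.05 -- a far larger ratio than the one we saw when comparing Non-ELA and ELA scores -- which is consistent with the f-test's extremely small p-value.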
This means that we have gathered enough evidence to credibly claim that the difference between the variances of our two groups is statistically significant. We will therefore choose to conduct a two-tailed t-test that assumes unequal variance.
Step 3. Navigate to the functions window in Excel and select TTEST
Step 4. Enter the conditions for the t-test
Click in the box next to "Array 1" and select the scores under the "Non-EcoDis" column in the Central Peaks ISD spreadsheet. The array, or list of cells, should read G3:G96.
Click in the box next to "Array 2" and select the scores under the "EcoDis" column in the Central Peaks ISD spreadsheet. The array, or list of cells, should read H3:H81.
Click in the box next to "Tails" and type "2". This indicates you want to conduct a two-tailed test.
Click in the box next to "Type" and type "3". This indicates that you want to conduct an independent sample t-test that assumes unequal variance.
Step 5. Interpret test results
As soon as you enter your test conditions, Excel will generate the p-value for you. You can see the p-value underneath the conditions you entered.
In this case, our p-value is .822. This means that if the null hypothesis is true and the difference between the means of our two groups is in fact 0, there is an 82.2% chance that we would observe a difference in the means of our two groups that is at least this extreme.
Because this t-test generated a p-value that was greater than .05, we fail to reject the null hypothesis.
This means that, according to the data and our test, there is no statistically significant difference between the average Non-EcoDis student test score and the average EcoDis student test score.
Using Formulas to Perform T-Tests in Google Sheets and Excel
The above step-by-step guide was created to help you see exactly how to make the decisions required to conduct t-tests in spreadsheets. Using the knowledge you now have from reviewing the step-by-step guide, you are able to perform t-tests using formulas in Excel AND Google Sheets. The structure of the formula is the same for both platforms and is as follows:
=TTEST(Range 1, Range 2, How Many Tails, Test Type)
For example, if I wanted to conduct a two-tailed independent sample t-test that assumes unequal variance -- similar to the example above -- I would use the formula =TTEST(G3:G96, H3:H81, 2, 3)
If you look closely, the formula contains the exact same elements we used in the Function Arguments screen in the step-by-step guide. We simply insert the cell ranges for each of our groups, list the number of tails we want in our test, and insert the test type, where:
1 = Dependent t-test
2 = Independent t-test that assumes equal variance
3 = Independent t-test that assumes unequal variance.
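One note on naming: current versions of Excel and Google Sheets also offer this function under the name =T.TEST(), which takes the same four arguments in the same order; =TTEST() is retained as a compatibility name. For example, the unequal variance test above could equivalently be written as:
=T.TEST(G3:G96, H3:H81, 2, 3)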
Central Peaks ISD
Having conducted three hypothesis tests on Central Peaks ISD's RSSP data, you are now in a position to answer the central questions posed at the beginning of this module.
There was no statistically significant difference in 3rd grade reading scores after the district implemented HQIM.
At the end of the year, there was a statistically significant difference between the average reading score of ELA and non-ELA students.
At the end of the year, there was not a statistically significant difference between the average reading scores of Non-Economically Disadvantaged and Economically Disadvantaged students.
Conclusion
Congratulations! You have completed both modules on Hypothesis Testing and are now ready to perform these statistical tests on your LEA's RSSP data!
You may be surprised how easy it is to conduct a t-test in Excel and Google Sheets. You might also be wondering why, if it is this easy, we spent two modules explaining the mechanics of hypothesis testing in such great detail. The reason is that we want to build your capacity as a data analyst. Showing you which buttons to press in Excel does not enhance your ability to understand and analyze data. In fact, one of the biggest problems in data analysis is that the tools we use have become so sophisticated that they no longer require an understanding of statistics to use them. This leads to analysts and researchers misusing tools and unknowingly generating faulty conclusions. We do not want this to happen to you.
Because you've completed both of the modules related to hypothesis testing, you now have the knowledge and intuition required to make informed choices about how to determine whether your results are statistically significant. This ability to reason through the steps required to make the correct analytical choices will add tremendous value as you move your LEA's RSSP work forward.
Congratulations on completing the Hypothesis Testing Part II module. Please complete the Exit Ticket form by clicking on the link below. We will use the information you submit to track your completion.