CKT Matter Assessment
This summative assessment tool has been developed by our project team to support teacher educators in understanding preservice teachers' CKT for matter.
Our 52-item CKT matter assessment, which takes about an hour to complete, is freely available. Our assessment platform supports two types of users:
Individual Users:
If you are interested in taking the CKT matter assessment for your own professional development, please register here: https://cktmatter.com/enroll/cktmatter
You will be able to take the assessment and receive an automatically produced score report summarizing your test results.
Teacher Educators:
If you are interested in assigning the CKT matter assessment to your class of pre-service elementary teachers, please log in here: https://cktmatter.com/
Follow the directions in the Teacher Educator Manual to create an account and set up a data collection for your class.
Once your class of pre-service elementary teachers takes the assessment, you will receive an automatically produced score report summarizing your class’s performance.
You can learn more about the assessment in this recording of our informational webinar.
Learn more about the assessment development by using the drop-down menus below:
In this project, we have been using the ‘Work of Teaching Science’ (WOTS) framework (Mikeska, Kurzum, Steinberg, & Xu, 2018) to guide the development of our CKT assessment instrument about matter and its interactions. The full WOTS framework:
focuses on seven instructional tools (e.g., scientific models and representations, scientific investigations) that elementary science teachers interact with.
includes 27 science teaching practices that research has indicated or hypothesized are critical for beginning elementary science teachers to know how to do well in order to be effective practitioners.
is organized into two to six science teaching practices per instructional tool category (e.g., the scientific explanations tool category includes two science teaching practices: one about critiquing student-generated explanations and another about selecting explanations of scientific phenomena).
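As a rough illustration of this organization, the sketch below (in Python) shows tool categories mapped to their teaching practices. It is not the project's actual data model; only the categories and the two scientific explanations practices named above are filled in, and the remaining categories are left as placeholders.

    # Illustrative sketch only: the WOTS framework groups 27 science teaching
    # practices under seven instructional tool categories, with two to six
    # practices per category. Only items named on this page are filled in.
    from typing import Dict, List

    wots_framework: Dict[str, List[str]] = {
        "scientific models and representations": [],  # practices not listed here
        "scientific investigations": [],              # practices not listed here
        "scientific explanations": [
            "critiquing student-generated explanations",
            "selecting explanations of scientific phenomena",
        ],
        # remaining instructional tool categories omitted
    }

    total_practices = sum(len(p) for p in wots_framework.values())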
To create each CKT assessment item, we used content-specific resources, such as research studies, educative curriculum guides and materials, and practitioner literature and knowledge, to generate instructional scenarios that involve elementary science teachers in leveraging their subject matter knowledge as they engage in these specific science teaching practices. This work involved identifying the content challenges elementary teachers face when teaching science in each of five key topic areas within matter and its interactions:
properties of matter and their measurements;
changes in matter;
conservation of matter;
model of matter; and
materials.
These challenges include the work teachers do to identify and select instructional activities, science phenomena, demonstrations, and models used within each of these topic areas. The instructional scenarios were intended to directly map onto the science teaching practices described in the WOTS framework.
Each CKT matter item was aligned to one of the five specific topics (e.g., changes in matter) and to a specific science teaching practice within one of the seven WOTS instructional tool categories (e.g., scientific models).
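To make this alignment concrete, here is a hypothetical sketch (in Python) of how a single item's metadata could record both dimensions. The item identifier is invented for illustration, and the topic and practice labels are taken from the examples above rather than from the actual item bank.

    # Hypothetical example of item alignment metadata: each CKT matter item is
    # tagged with one matter topic and one science teaching practice within a
    # WOTS instructional tool category.
    from dataclasses import dataclass

    @dataclass
    class CKTItemAlignment:
        item_id: str            # invented identifier for illustration
        topic: str              # one of the five matter topics
        tool_category: str      # WOTS instructional tool category
        teaching_practice: str  # specific practice within that category

    example_item = CKTItemAlignment(
        item_id="item-example-01",
        topic="changes in matter",
        tool_category="scientific explanations",
        teaching_practice="critiquing student-generated explanations",
    )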
A key part of this project’s work involves determining how CKT assessment items can be assembled into a valid and reliable assessment instrument to measure pre-service elementary teachers’ CKT proficiency in one science area: matter and its interactions. To develop this CKT assessment and the individual CKT items about matter and its interactions, we used:
the principles of evidence-centered design (Mislevy & Riconscente, 2006) and
a process closely modeled after the item development work on the Measures of Effective Teaching (MET) Project (Phelps, Weren, Croft, & Gitomer, 2014) and an earlier NSF research project (Mikeska, Phelps, & Croft, 2017).
Our CKT assessment development process includes the steps shown in the figure below.
The purpose of the score report is to provide meaningful and actionable information to teacher educators about their elementary pre-service teachers’ performance on the CKT assessment about matter and its interactions. To create the teacher educator score report, our team drew upon design principles outlined by Zapata-Rivera et al. (2012), Zenisky and Hambleton (2012), and Zwick et al. (2014), such as providing multiple representations of both individual and class performance and making connections to instructional materials to support pre-service teacher CKT development. We also refined the initial teacher educator score report based on feedback from our project’s advisory board and a convenience sample of seven elementary teacher educators who had extensive experience in science teacher education.
Score Report Components
The teacher educator score report includes five main sections, each on its own tab in an Excel spreadsheet:
Section 1 (score report overview) describes the purpose of the score report and presents information about the score report content and how to navigate across the different sections of the report.
Section 2 (scale scores) describes individual pre-service teachers’ scores and aggregated class scores on the CKT assessment about matter and its interactions at the pretest and posttest time points, using various tables and graphs. This section also includes information about measurement error, comparison of the class scores to a reference population, and levels of performance.
Section 3 (class summary) provides class summaries of the pre-service teachers’ performance by content areas and Work of Teaching Science (WOTS) categories for the pretest and posttest.
Section 4 (item map) is a tool to help interpret the pre-service teachers’ CKT test scores. The item map ranks a subset of the CKT assessment items by difficulty, provides a summary of the skills assessed by each item, and maps each item’s difficulty level to a scale score.
Section 5 (acting on your results) provides information about example CKT matter items and instructional packets for use by elementary science teacher educators. This section describes the content areas and WOTS categories included in the CKT assessment and provides links to CKT item examples and CKT packets that can be used to guide instruction based on assessment results.
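Because the report is delivered as a multi-tab Excel workbook, teacher educators who want to analyze results beyond the built-in tables and graphs could load it programmatically. The short sketch below uses pandas to read every tab; the file name is an assumption for illustration, and the tab labels in your copy of the report may differ.

    # Minimal sketch: load all tabs of the score report workbook into a dict
    # of DataFrames keyed by tab name. The file name is assumed for illustration.
    import pandas as pd

    report = pd.read_excel("ckt_matter_score_report.xlsx", sheet_name=None)

    for tab_name, data in report.items():
        print(f"{tab_name}: {data.shape[0]} rows x {data.shape[1]} columns")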
This video provides a short overview of the teacher educator score report and illustrates a few of the report’s interactive features.
Mikeska, J.N., Kurzum, C., Steinberg, J., & Xu, J. (2018). Assessing elementary science teachers’ content knowledge for teaching science for the ETS Educator Series: Pilot results. ETS Research Report Series. Princeton, NJ: Educational Testing Service. doi:10.1002/ets2.12207
Mikeska, J.N., Phelps, G., & Croft, A. (2017). Practice-based measures of elementary science teachers’ content knowledge for teaching: Initial item development and validity evidence. ETS Research Report Series. Princeton, NJ: Educational Testing Service. doi:10.1002/ets2.12168
Mislevy, R. J., & Riconscente, M. M. (2006). Evidence-centered design: Layers, concepts, and terminology. In S. Downing & T. Haladyna (Eds.), Handbook of test development. Mahwah, NJ: Erlbaum.
Phelps, G., Weren, B., Croft, A., & Gitomer, D. (2014). Developing content knowledge for teaching assessments for the Measures of Effective Teaching study. ETS Research Report Series. Princeton, NJ: Educational Testing Service. doi:10.1002/ets2.12031
Zapata-Rivera, D., VanWinkle, W., & Zwick, R. (2012). Applying score design principles in the design of score reports for CBAL teachers. ETS Research Memorandum, 12-20.
Zenisky, A. L., & Hambleton, R. K. (2012). Developing test score reports that work: The process and best practices for effective communication. Educational Measurement: Issues and Practice, 31(2), 21-26.
Zwick, R., Zapata-Rivera, D., & Hegarty, M. (2014). Comparing graphical and verbal representations of measurement error in test score reports. Educational Assessment, 19(2), 116-138.