A system, method, and related techniques are disclosed for scoring user responses to constructed-response test items. The system includes a scoring engine that receives a user response to a test question and evaluates it against a scoring rubric. The scoring rubric may include a binding stage, an assertion stage, and a scoring stage. The system also includes a database of elements referenced by the scoring engine, which may comprise objects, object sets, attributes of objects, and transformations of any of these elements.
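The three rubric stages above can be sketched in miniature. This is a hypothetical illustration, not the disclosed implementation: the function names (`bind`, `evaluate`, `score`), the keyword matchers, and the sample item are all invented for this example. Binding extracts elements from the raw response, assertions test conditions over those elements, and scoring converts assertion outcomes into points.

```python
def bind(response, matchers):
    # Binding stage: map the raw response onto rubric elements
    # (here, just its lowercased word list; names are illustrative).
    return {name: fn(response) for name, fn in matchers.items()}

def evaluate(bindings, assertions):
    # Assertion stage: test logical conditions over the bound elements.
    return {name: pred(bindings) for name, pred in assertions.items()}

def score(outcomes, weights):
    # Scoring stage: award points for each satisfied assertion.
    return sum(weights[name] for name, ok in outcomes.items() if ok)

# A toy one-item rubric (entirely hypothetical):
matchers = {"words": lambda r: r.lower().split()}
assertions = {
    "names_process": lambda b: "photosynthesis" in b["words"],
    "names_energy_source": lambda b: "sunlight" in b["words"],
}
weights = {"names_process": 2, "names_energy_source": 1}

response = "Plants use photosynthesis to turn sunlight into food."
outcomes = evaluate(bind(response, matchers), assertions)
print(score(outcomes, weights))  # 3
```

In a production engine the matchers and assertions would draw on the element database described above rather than simple keyword tests.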
"With 6-7x more constructed responses to grade annually for STAAR, maintaining full human scoring would have cost $15-20M more per year." - TEA Student Assessment
Cambium Assessment’s Machine Learning team combines the latest methods in psychometrics, item development, language modeling, and data science to improve how we assess students.
Cambium's practices are deeply grounded in psychometrics. We aim to provide accurate, reliable, and fair information that empowers educators to improve student learning. The faster we can return high-quality results, the faster our clients, teachers, and students can act.
Cambium uses machine learning in key ways to improve how we assess students. Our automated scoring engines can predict the same scores that professionally trained scorers assign for essays and constructed-response items, and they consistently demonstrate performance comparable to human scorers.
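Human-machine agreement of this kind is conventionally measured with quadratic weighted kappa (QWK), a chance-corrected agreement statistic standard in automated-scoring research. The sketch below is a generic QWK implementation, not Cambium's evaluation code; the sample score vectors are invented for illustration.

```python
def quadratic_weighted_kappa(human, machine, num_labels):
    """Chance-corrected agreement between two raters on an ordinal scale.

    1.0 means perfect agreement; 0.0 means agreement no better than chance.
    Disagreements are penalized by the squared distance between scores.
    """
    n = len(human)
    # Observed confusion matrix: rows = human scores, cols = machine scores.
    obs = [[0.0] * num_labels for _ in range(num_labels)]
    for h, m in zip(human, machine):
        obs[h][m] += 1
    # Marginal score histograms for each rater.
    hist_h = [sum(row) for row in obs]
    hist_m = [sum(obs[i][j] for i in range(num_labels)) for j in range(num_labels)]
    num = den = 0.0
    for i in range(num_labels):
        for j in range(num_labels):
            w = (i - j) ** 2 / (num_labels - 1) ** 2  # quadratic weight
            num += w * obs[i][j]                       # observed disagreement
            den += w * hist_h[i] * hist_m[j] / n       # expected by chance
    return 1.0 - num / den

# Hypothetical scores on a 0-3 rubric for eight responses:
human   = [0, 1, 2, 2, 3, 1, 0, 2]
machine = [0, 1, 2, 1, 3, 1, 0, 3]
print(round(quadratic_weighted_kappa(human, machine, 4), 3))  # 0.887
```

Identical score vectors give a QWK of exactly 1.0, and in operational scoring programs the engine's QWK against human raters is typically compared with the QWK between two independent human raters on the same responses.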