Validity

It is impossible to directly assess the knowledge and understanding held in a student's mind. What teachers instead do is use carefully selected proxies (assessment tasks) to gather evidence from which valid inferences can be made about a student's knowledge and understanding (Christodoulou, 2016).

Validity is the lynchpin of all assessment tasks and of the inferences drawn from assessment data (Christodoulou, 2016). Three perspectives are considered in determining validity: “the form of the measure, the purpose of the assessment, and the population for which it is intended” (Dirksen, 2013). Masters (2013) argues that validity centres on how fit for purpose an assessment is for the domain being assessed. Darr (2005a) notes that “Judging validity cannot be reduced to a simple technical procedure. Nor is validity something that can be measured on an absolute scale. The validity of an assessment pertains to particular inferences and decisions made for a specific group of students” (p. 55). The inferences drawn from the data that assessment generates are the foundation of the ACT system. Bennett (2011) argues that for an assessment to be valid, it should be supported by data showing that different observers would draw the same inferences from the same evidence.

Consider:

Is this suite of tasks fit for purpose?

Do the methods of assessing knowledge, understanding and skills suit the intentions of the assessment?

These guidelines focus on six areas of validity. Validity can be affected by six factors, which form the core of the quality assessment guidelines:

· coverage of the curriculum

· reliability

· bias

· provision for a range of thinking levels

· student engagement

· academic integrity.

