Multiple assessments appropriate for each learning outcome being measured, including but not limited to direct assessments of student learning (e.g., essays, exam items, assignments, or presentations, with rubrics/scoring guides as appropriate).
Direct assessment in which student work (essays, exam items, assignments, presentations, etc., with rubrics/scoring guides as appropriate) has been selected that is appropriate for each learning outcome being measured.
Only indirect assessments, which do not directly examine student work, are being used.
Possibilities include student self-perception of ability, grades not specifically linked to outcomes, and faculty evaluations that are not linked to student work.
Raw scores were translated into ACTFL levels in the following fashion:
In the examples above,
Multiple assessments (direct and indirect) are used to set expectations and to assess student learning.
This rubric can be used to directly assess student ability related to aspects of the department's communication learning outcome and the ACTFL oral proficiency standard.
Additionally, the rubric can be used for both formative (shared with students to indicate areas of strength and where improvement is desirable) and summative assessment.
As the presentations were recorded, it was possible for multiple faculty members to score each piece of student work.
The Can-Do Statements can be used as an indirect assessment to encourage self-reflection on progress toward language goals and, in combination with direct assessments, to ascertain student perceptions of ability as compared with faculty determinations.
The ACTFL Proficiency Guidelines are useful tools for articulating standards that describe learners' functional language ability as determined by experts in second language acquisition.
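Where a department translates raw rubric scores into ACTFL levels, recording the cut scores in a small script can make the translation explicit and repeatable. The sketch below is an illustration only; the cut scores and level names are hypothetical and do not reproduce the department's actual mapping.

```python
# Hypothetical illustration only: the department's actual raw-score cut points
# are not reproduced here. This sketch shows one way to make a translation
# from raw rubric scores to ACTFL levels explicit and repeatable.

# (cut score, ACTFL level): a raw score at or above the cut maps to that level.
CUT_SCORES = [
    (18, "Advanced Low"),       # hypothetical cut
    (14, "Intermediate High"),  # hypothetical cut
    (10, "Intermediate Mid"),   # hypothetical cut
    (6,  "Intermediate Low"),   # hypothetical cut
    (0,  "Novice High"),        # hypothetical cut
]

def to_actfl_level(raw_score: int) -> str:
    """Return the ACTFL level for a raw rubric score (hypothetical cuts)."""
    for cut, level in CUT_SCORES:
        if raw_score >= cut:
            return level
    return "Below Novice High"

if __name__ == "__main__":
    for score in (5, 11, 19):
        print(score, "->", to_actfl_level(score))
```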
Student work is an appropriately collected sample (simple random or systematic), the full population, or an otherwise suitably selected set that ensures results are representative and the amount of work is feasible for the assessment committee.
Assessment materials do not include student work and/or are gathered on a volunteer or ad hoc basis. The amount of collected material is either too little or too much for the committee to reasonably examine.
Assessment results are obtained by analyzing the entire population's answers to the assessment items embedded within the final exam.
Rubric scores are collected from majors whose alpha code has a penultimate digit of 0, 2, or 4.
Institutional Research identified a random sample of 100 students in the introductory course, and the department collected each sampled student's final essay for scoring with a rubric.
Additional thoughts on sampling are available in the sampling white paper on the USNA assessment webpage.
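For departments that sample rather than score the full population, a short script can document exactly how the sample was drawn. The sketch below shows the two approaches named in the rubric row above, simple random and systematic sampling; the roster file name and column are hypothetical, and the sample size follows the Institutional Research example.

```python
"""Minimal sketch of simple random and systematic sampling of student work.

The roster file name and "alpha_code" column are hypothetical; substitute the
department's actual course roster and the sample size agreed on for the
assessment cycle.
"""
import csv
import random

with open("intro_course_roster.csv", newline="") as f:  # hypothetical roster file
    students = [row["alpha_code"] for row in csv.DictReader(f)]

# Simple random sample, e.g., 100 students as in the Institutional Research example.
random_sample = random.sample(students, k=min(100, len(students)))

# Systematic sample, e.g., majors whose alpha code has a penultimate digit of 0, 2, or 4.
systematic_sample = [a for a in students if a[-2] in {"0", "2", "4"}]

print(len(random_sample), "students in the random sample")
print(len(systematic_sample), "students in the systematic sample")
```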
Evaluation and analysis of student work are shared by multiple faculty members and, when appropriate, procedures for improving rater agreement (inter-rater reliability) are indicated.
Evaluation and analysis of student work are shared by multiple faculty members.
Assessment of student work or other assessment materials takes place in isolation, and/or analysis of results is handled primarily by a single individual.
While the Assessment Mania event did not specifically assess student work, it brought together faculty to share ideas and create a foundation for assessment activities that support the learning the department values and also align with institutional goals. This was accomplished by revising outcomes, updating existing assessment tools, and developing new tools to better assess the learning expected by members of the department.
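When multiple faculty members score the same recorded work, as in the oral presentation example above, a basic agreement check can indicate whether raters are applying the rubric consistently. The sketch below uses hypothetical scores and reports exact and adjacent agreement; departments wanting a chance-corrected statistic might compute Cohen's kappa instead.

```python
"""Minimal sketch of a rater-agreement check for shared scoring.

Scores are hypothetical; in practice each list would hold one rubric score
per piece of student work from a different faculty rater.
"""
rater_a = [4, 3, 3, 2, 4, 1, 3, 2]  # hypothetical rubric scores from rater A
rater_b = [4, 3, 2, 2, 4, 2, 3, 2]  # hypothetical rubric scores from rater B

pairs = list(zip(rater_a, rater_b))
exact = sum(a == b for a, b in pairs) / len(pairs)        # identical scores
adjacent = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)  # within one level

print(f"Exact agreement:    {exact:.0%}")
print(f"Adjacent agreement: {adjacent:.0%}")
# A chance-corrected statistic (e.g., Cohen's kappa) gives a stricter view of
# agreement when the rubric scale has only a few levels.
```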
Target level or performance expectations are indicated for students at various points within the program, reflecting expected development.
Target level or performance expectations are indicated for the assessment and appear appropriate.
Criteria for different levels of performance have been indicated, but expectations are not clearly identified or are inappropriate (much too high or too low).
In this example,
The department has determined performance expectations (middle column), consistent with external standards, for students nearing completion of a minor or major in the target language.