Course Description
Assessing learner outcomes is an essential component of any educational activity and applies to individual sessions as much as to multi-year curricula. Such assessment is critical to decision making: decisions about improving the curriculum, advancing students, determining competence attainment, evaluating programs, and numerous others. This course will address the procedures and practices that produce high-quality assessment information to support these decisions. We will closely examine the learning goals of a curriculum and how these goals can be translated into measurable outcomes. We will also probe the nuances of different types of outcomes and how these differences link to preferred assessment methods. These activities will be structured around the development of an “assessment blueprint” that each course participant will design in connection with the curriculum they have developed.
Learning Outcomes: By the end of the course, educators should be able to:
Knowledge
Explain the concepts of reliability and validity and why they are important to effective assessment.
Describe the three domains of learning (knowledge, skills, attitudes) and relevant taxonomies within them.
Distinguish between formative and summative assessment and how this distinction affects the use of assessment results.
Skills
Define effective assessment goals and an assessment blueprint.
Construct effective assessment instruments pitched to selected levels within the three domains.
Identify and present evidence for the reliability and validity of specific assessment instruments.
Set appropriate standards for assessment of learning.
Evaluate and report assessment results in appropriate contexts.
Attitudes
Recognize and describe how ethical and cultural issues might create bias in assessments and why it is important for educators to consider these.
Recognize their own biases and “comfort zones” and how these influence the choices they will make as educational assessors.
Work diligently to assure that assessments truly measure targeted outcomes for all participants.
MICU Curriculum Assessment Blueprint
Thoughts on Assessment by MICU Leadership
Reflection:
Assessment, if conducted properly, should serve multiple purposes: determining whether the learning objectives that were set are being met, supporting student learning, certifying and judging competency, developing and evaluating teaching programs, understanding the learning process, and predicting future performance. This, in turn, has multiple implications. Most importantly, learners will be interested in the results of their assessment. Prior to taking this class, my understanding of assessment was limited to testing large numbers of facts and filling out whatever rubric or form was provided for a learner, without much thought about its implications or fairness. Was it the correct tool? Perhaps only partially.
The Accreditation Council for Graduate Medical Education (ACGME) has always mandated evaluation of resident learners for continued accreditation of residency programs. However, with the movement to milestone-based education and the Outcome Project, in which programs are accredited based on patient care and learner outcomes, accurate assessment and evaluation are even more critical. In the past, resident feedback has been based on knowledge acquisition and the learner's ability to recall key concepts as defined by the faculty. That approach is not valid for assessing skills and milestone achievement.
In Graduate Medical Education, the ideal evaluation should be real-time, relevant, and practical; global rating forms will continue to have an important role in assessing residents. Identifiable issues with resident assessments include inadequate description of evaluation criteria, variation in raters' observations and assessments, unsatisfactory or absent meaningful feedback, and lack of timeliness. Furthermore, assessment tools often lack detailed descriptions of performance expectations and of the behaviors expected for each competency or domain. There continues to be an overemphasis on evaluating knowledge acquisition rather than measuring performance progress over time.
Rubrics are gaining increasing recognition in medical education. A rubric generally has four parts: a description of the task, the scale to be used, the dimensions of the task, and a description of each dimension on the scale. Overall, rubrics promote consistency in scoring, encourage self-improvement and self-assessment, motivate learners to achieve the next level, provide timely feedback, and improve instruction. Clinical evaluation remains challenging for most faculty, and rubrics provide a learner-centered assessment approach that focuses on encouraging behavioral change in learners. Performance tests are generally used to determine whether a learner has mastered specific skills, and the instructor typically makes inferences about the level to which each skill has been mastered. Rubrics offer a potential solution to the subjective grading dilemma faced by clinical faculty.
The content of a rubric should be relevant to the area of training. By itself, content does not reflect the quality of performance, especially if the rubric is prepared in a checkbox format. Quality of performance is a metric we should focus on in clinical training, as it is an important measure in practice. Clarity is key to achieving consistent scores and reducing variability; clear descriptions within the rubric help evaluators score correctly. Practicality is also valued: if the evaluation is not self-explanatory, evaluators will not complete it.
With all factors considered, we should focus more on faculty development in order to obtain consistent and relevant evaluations of learners!
Updated July 2015