Competency-Based Assessment at UVEI

UVEI uses competency-based assessments to ensure that candidates develop the core skills of teaching through their experiences and the associated program elements. A competency is “a combination of skills, abilities, and knowledge needed to perform a specific task” (Voorhees, 2001). Competency-based approaches require learning products to be defined explicitly (Voorhees, 2001).

Figure 1.1 below illustrates the UVEI conception of competency-based learning models:

[Figure 1.1: The UVEI conception of competency-based learning models]

The foundational level relates to traits and characteristics. These constitute the foundation for learning and depict the innate makeup of individuals on which further experiences can be built (Voorhees, 2001). UVEI has an in-depth system and philosophy for both selection and orientation, focused on core faculty learning about new interns in depth, allowing learning to happen in personalized ways.

The second rung consists of skills, abilities and knowledge. These are developed through learning experiences, broadly defined to include work and participation in educators’ authentic communities of practice (Lave & Wenger, 1991; Voorhees, 2001; Wenger, 1998). As described above, at UVEI this learning occurs through authentic experience, framed and understood through carefully structured cycles of inquiry.

Competencies are the result of integrative learning experiences in which skills, abilities and knowledge interact to form learning bundles that relate to the task for which those skills and abilities are intended (Voorhees, 2001). At UVEI, these competencies are defined by national standards (InTASC for teachers and ISLLC for school leaders, respectively).

Finally, demonstrations are the results of applying competencies. It is at this level that performance-based learning can be assessed (Voorhees, 2001).

Assessment of Competency:

Competency-based models ultimately rely on measurable assessment. In other words, if a proposed competency cannot be described unambiguously and subsequently measured, it probably is not a competency. Given these fundamental attributes, all parties to the learning process—faculty, external experts, administrators, and students—must be able to understand with clarity the outcomes of the learning experience. Under these circumstances, competencies are transparent. Learning outcomes hold no mystery, and faculty are freed from the burden of defending learning outcomes that are verified only by professional judgment (Voorhees, 2001). At UVEI, competencies are defined by a set of indicators and are assessed through multiple measures, each of which has different strengths and weaknesses.

In teacher education, however, a tension exists between clarity of the assessment of competency and the inherent complexity of teaching and learning. Multiple measures are important for ensuring that competency-based systems of assessment are responsive to this tension (Darling-Hammond, 2006a). At UVEI, assessments include a set of performance assessments specifically designed for assessing particular indicators of competency (Darling-Hammond, 2006a), observations of clinical practice (Darling-Hammond, 2006a), a combination of analytical and holistic evaluation of evidence based on a portfolio (Ryan & Kuhs, 1993; Zidon & Greves, 2002), and examining student work products (Darling-Hammond, 2006a). Each of these aspects of UVEI’s measures of competency will be discussed below.

Performance Assessments: Performance assessments are designed to look at what teachers can actually do before they begin to teach, rather than relying on seat time, course credits, or paper-and-pencil tests alone (Educator Excellence Task Force, 2012). Performance assessments at UVEI, whenever possible, are developed from valid and reliable assessments, include common rubrics and criteria, incorporate opportunities for calibration amongst the faculty, and are intended to give clear evidence of teachers’ and leaders’ competencies (thus satisfying the “clarity” criterion for effective competency-based structures discussed above). High-quality performance assessments have also been found to help faculty members plan with the end in view and to calibrate all faculty toward agreed competencies (Whittaker & Nelson, 2013).

UVEI uses an array of performance assessments. As a partner in the New Hampshire IHE network, UVEI has been collaboratively developing the Teacher Common Assessment of Performance (TCAP), which is based on the highly regarded Performance Assessment for California Teachers. UVEI has used this performance assessment, and sub-components of the assessment, as the cornerstone of our performance assessment system. The TCAP provides UVEI a means to evaluate elements of teaching skill systematically and authentically within our program (Darling-Hammond, 2006a).

The implementation of the Performance Assessment for California Teachers (PACT) has been researched extensively and has been found not only to measure features of teaching associated with effectiveness, but actually to help develop effectiveness at the same time (Darling-Hammond, Wei, & Johnson, 2009; Merino & Pecheone, 2013; Pecheone & Chung, 2006). Furthermore, the research finds that teacher candidates’ PACT scores are significant predictors of their later teaching effectiveness as measured by their students’ achievement gains (Darling-Hammond, Newton, & Wei, 2013).

In addition to its usefulness as a measure of teacher competency, the PACT has been found to inform participating programs about how they can better support candidate learning and to identify areas for examination and program improvement (Merino & Pecheone, 2013; Peck & McDonald, 2013). UVEI has, we believe, effectively used the TCAP for all of these purposes.

For all of these reasons, UVEI uses performance assessments (whether from the TCAP or other sources) as a key element of our competency-based assessment system.

Longitudinal Observations of Clinical Practice: Interns in both the teacher and principal programs are regularly observed informally in practice as part of ongoing coaching cycles. In addition, both programs include formal observations based on research-based frameworks for practice.

Detailed rubrics with clear criteria, used by Faculty Coaches to evaluate intern teachers’ progress, are embedded in extensive (year-long) clinical settings. This allows observations to serve both as an assessment of a candidate’s learning-in-progress and, by the end of the year, as a measure of summative competency resulting from the preparation program (Darling-Hammond, 2006a).

The teacher program includes four formal observations based on the Danielson Framework (two focused on classroom environment and two on instruction). The Danielson Framework was chosen because of its pervasiveness nationally and because of evidence that evaluation based on the Danielson Framework is correlated with a host of desirable outcomes, including increased student learning (Kane & Staiger, 2012).

The field of principal preparation does not have equivalently validated observation instruments. However, the Principal Internship Program includes two observations based on research-based instruments. The first is an instrument developed by the New Teacher Center (2005) designed to assess instructional leaders’ ability to facilitate learning conversations amongst teachers, and the second is an instrument developed at the University of California Berkeley as part of a federally funded design study. The instrument was designed to evaluate the effectiveness of instructional conferences (P. Tompkins, Mintrop, & Wayne, 2013).

Analyzing Candidate Work: Beyond performance assessments, the program includes multiple opportunities for faculty to examine student work more formatively, as it is embedded in teachers’ and leaders’ inquiry cycles. UVEI has found that this collaborative exploration of work in progress (a collaboration that usually emphasizes collegial analysis of work), together with the opportunity for interns to revise and further develop their submissions, allows evaluators to see interns’ development over time and to form a more complete picture of their emerging competency (Darling-Hammond, 2006a).

Additionally, UVEI analyzes candidate work that is selected by the candidates (but may not be a required element or assignment). Examples include lesson plans, samples of student work, and other artifacts developed in practice.

Portfolios: Professional portfolios provide a tool for assessing pre-service students' progress in developing the myriad complex skills necessary for effective teaching. Used effectively, portfolios can promote reflective practice, increase self-confidence, prepare candidates for the job search, and heighten candidates' awareness of professional standards (Willis & Davies, 2002). At UVEI, portfolios incorporate all of the forms of evidence discussed above.

UVEI uses a modified holistic approach in evaluating portfolios, which combines analytic and holistic approaches and has been found to be especially useful (Ryan & Kuhs, 1993). The performance assessments and clinical observations, described above, are scored using specific criteria. Additional evidence is selected and included as appropriate. The evaluator then assigns a single, integrated (holistic) score based on the indicators, considering both the analytical data and the preponderance of the evidence.

The use of a modified holistic approach in the review of pre-service teachers' portfolio assessment information is beneficial because it obligates faculty to discuss and agree on what they expect to see when they examine students' assessment information. The modified holistic approach provides an integrated, constructed, more qualitative picture of the prospective teacher's portfolio data rather than reducing each individual to a series of discrete quantitative scales. Such a picture is more faithful to the complexity and integrative nature of teaching and supports the need for flexibility to effectively and efficiently operate a pre-service teacher assessment system (Ryan & Kuhs, 1993).

In sum, there are several advantages of competency-based assessments for learners in experiential learning models. Because learning can be described and measured in ways that are understood by all parties, competencies permit the learner to return to one or more competencies that have not been mastered in a learning process rather than face the unwelcome prospect of repeating one or more traditional courses. Competency-based assessments also provide students with a clear map and the tools needed to move toward established goals (Voorhees, 2001).

Effective programs that use competency-based assessment systems share several critical characteristics (Voorhees, 2001), all of which we believe are well integrated at UVEI:

    • Senior leaders of the organization are the public advocates, leaders and facilitators for creating an institutional culture that is open to change, is willing to take risks, and fosters innovations by providing real incentives for participants.
    • The appropriate stakeholders (faculty, coaches, mentors and participants) participate in identifying, defining, and reaching consensus about important competencies.
    • Competencies are clearly defined, understood and accepted by relevant stakeholders.
    • Competencies are defined at a sufficient level of specificity that they can be assessed.
    • Multiple assessments of competencies provide useful and meaningful information that is relevant to the decision-making and policy development contexts.
    • Faculty and staff participate in making decisions about the strongest assessment instruments that will measure their specific competencies.
    • Precision, reliability, validity, credibility and costs are all considered and examined in making selections of the best commercially developed assessments or locally developed assessment approaches.
    • The competency-based educational initiative is embedded in a larger institutional planning process.
    • The assessments of competencies are directly linked with the goals of the learning experience.
    • The assessment results are used in making critical decisions about strategies to improve student learning.
    • The assessment results are clear and are reported in a meaningful way so that all relevant stakeholders understand the findings.
    • The institution experiments with new ways to document students’ mastery of competencies that supplement the traditional transcript.

Sources:

Darling-Hammond, L. (2006a). Assessing Teacher Education: The Usefulness of Multiple Measures for Assessing Program Outcomes. Journal of Teacher Education, 57(2), 120-138. doi: 10.1177/0022487105283796

Darling-Hammond, L. (2006b). Constructing 21st-Century Teacher Education. Journal of Teacher Education, 57(3), 300-314. doi: 10.1177/0022487105285962

Darling-Hammond, L., Newton, S. P., & Wei, R. C. (2013). Developing and assessing beginning teacher effectiveness: the potential of performance assessments. Educational Assessment, Evaluation, and Accountability, 25(3).

Darling-Hammond, L., Wei, R. C., & Johnson, C. M. (2009). Teacher preparation and teacher learning: A changing policy landscape. In G. Sykes, B. Schneider, & D. Plank (Eds.), Handbook of Education Policy Research. New York: Routledge Publishers.

Educator Excellence Task Force. (2012). Greatness by design: Supporting outstanding teaching to sustain a golden state. Sacramento, CA: California Department of Education and California Commission on Teacher Credentialing.

Kane, T., & Staiger, D. (2012). Gathering feedback for teaching: Combining high-quality observations with student surveys and achievement gains: Bill & Melinda Gates Foundation.

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New York: Cambridge University Press.

Merino, N., & Pecheone, R. (2013). The Performance Assessment for California Teachers: An introduction. The New Educator, 9(1).

New Teacher Center. (2005). Continuum of mentor development.

Pecheone, R. L., & Chung, R. R. (2006). Evidence in teacher education: The performance assessment for California teachers (PACT). Journal of Teacher Education, 57(1), 22-36.

Peck, C., & McDonald, M. (2013). Creating "cultures of evidence" in teacher education: Context, policy, and practice in three high-data-use programs. The New Educator, 9(1).

Ryan, J. M., & Kuhs, T. M. (1993). Assessment of Preservice Teachers and the Use of Portfolios. Theory into Practice, 32(2), 75-81. doi: 10.2307/1476322

Voorhees, R. (2001). Competency-based learning models: A necessary future. New Directions for Institutional Research, Summer 2001(110).

Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. New York: Cambridge University Press.

Whittaker, A., & Nelson, C. (2013). Assessment with the "end in view". The New Educator, 9(1).

Willis, E. M., & Davies, M. A. (2002). Promise and Practice of Professional Portfolios. Action in Teacher Education, 23(4), 18-27. doi: 10.1080/01626620.2002.10463084