It’s a multi-year, multi-phased initiative to bring greater consistency and equity to the evaluation of teaching at SLU. It will focus on aligning our teaching evaluation practices with the literature on effective, responsible evaluation. It also will support us in better recognizing and rewarding teaching. In its broadest form, the project will help us to better define, document, enhance, evaluate, and recognize effective teaching in meaningful ways that align with our institutional identity.
There are many reasons to do this work, but an important one is this: SLU faculty have asked for it, and they've been asking for a while now.
The Faculty Senate has recommended significant change in the annual evaluation of faculty performance, and in particular, in the area of teaching evaluation. The Provost-Faculty Senate Gender Equity Committee also has identified the need for meaningful change in this area. Throughout the process of developing our new Academic Strategic Plan, faculty repeatedly identified teaching – its evaluation and valuing – as a significant focus area. And the faculty who have been engaged in our NSF ADVANCE project have echoed the calls for serious and substantive change in the evaluation of teaching. While the evaluation of teaching happens within academic units, it has become clear that we need more consistent, evidence-informed evaluation practices across the University.
Comprehensive, multi-faceted evaluation of teaching is essential to equitable faculty evaluation – and to truly valuing teaching. The University has been in a period of tremendous growth in research/scholarship, but we continue to be a place that deeply values teaching; our evaluation and reward practices, however, have not always reflected this value.
In 2018, SLU adopted a University Policy on End-of-Term Student Evaluation of Courses (now the University Policy on End-of-Term Course Feedback Surveys), which makes clear that evaluation of teaching should be “comprehensive” and should not rely solely on student feedback. But the adoption of more robust, multi-faceted evaluation practices has been uneven, and at an institutional level, we have not devoted the attention to this effort that we should. Thankfully, both faculty and academic leaders have been asking for meaningful change in this area for quite some time. In the last couple of years, the Faculty Senate recommended changes to annual faculty performance evaluation, and faculty and academic leaders across the University said this work should be a key goal in our Academic Strategic Plan (ASP). It is, in fact, the very first item in our new ASP; that is not an accident.
This initiative is focused on moving from our current state (where teaching is evaluated in vastly different ways across the University, primarily based on student course feedback) to a comprehensive, holistic system of evaluation that is grounded in multiple sources of evidence, representing multiple perspectives, and consistent with the literature on effective teaching evaluation. The focus, then, is not primarily on student feedback (a.k.a. "student evaluations" or "student course evaluations"), although we do anticipate some changes to student course feedback as part of the larger initiative.
This work will unfold in multiple phases over several years. Faculty input and feedback will be critical every step of the way. As the initiative progresses, there will be different working groups and project teams, focused on different aspects of the work, with many opportunities for faculty to shape this work. Regular – and meaningful – faculty engagement is essential for this initiative to be successful, as is an explicit focus on and commitment to equity.
This work is a partnership between the Faculty Senate and the Provost's Office. Senate President Chris Rollins and Provost Mike Lewis asked Drs. Lisieux Huelman and Debie Lohe to co-lead Phase 1 of the project.
Lisieux is Associate Professor in English as a Second Language and Co-Chair of the Faculty Senate’s Academic Affairs Committee. Debie serves as Associate Provost and Chief Online Learning Officer for the University. While they will lead the first-phase project team, it’s important to say that all of you – all of us – will lead and own this work, together. Debie and Lisieux are committed to a transparent, participatory approach to this project.
In brief, the literature is consistent and clear about what is required for a responsible, equitable system of evaluation for teaching. To do this work well, we must articulate what we're evaluating for (that is, define "effective teaching") and develop a system of evaluation that considers multiple sources of evidence, representing multiple perspectives, with attention to formative feedback and growth over time. Institutions operationalize this work in a variety of ways, informed by their own context, but effective approaches are grounded in evidence-based understandings of what constitutes effective teaching.
Because teaching is evaluated by human beings, there are biases in all evaluation processes. While we often hear about the biases that come through student feedback on teaching, peers and administrators also have biases when evaluating teaching (including biases about specific pedagogical practices). Thus, the goal is not to achieve an "objective" measure of teaching effectiveness (which, arguably, is not possible) but to develop a system of evaluation that can mitigate the impact of biases. When we use multiple measures, representing multiple perspectives and grounded in multiple types of evidence, we are more likely to see patterns of behavior and better able to discern the ways in which those patterns do – or do not – align with the standards against which we are evaluating. In other words, triangulating data moves us toward more reliable measures.
Eventually, probably so – and that work should begin to be factored into our collective understanding of what it takes to recognize and reward teaching in meaningful ways. As the project progresses, it will be imperative that we are honest about what the workload is and how to account for it in faculty members' and administrators' workload assignments. If, for example, SLU develops a peer observation "corps" – a group of trained classroom/course observers who are prepared to provide effective formative feedback on instruction – the faculty members in that corps should receive recognition for that work (e.g., through adjusted workload assignments, professional development funds, stipends, etc.). When we get to that aspect of the project, it will be important for the University to consider effective, feasible ways to recognize that labor.
SLU’s Teaching Effectiveness Framework (once adopted and customized by department/college) will have multiple uses. It will guide individual instructors’ development and growth, reviews of teaching, tenure and promotion standards, and institutional recognitions of effective teaching.
Future phases of the Teaching Effectiveness Project will determine and articulate what these applications of the framework look like across units. Once a framework has been adopted, we anticipate that a robust toolkit of materials will accompany its implementation, including resource guides, tip sheets, etc. Additionally, the Reinert Center will be able to link its various resources and offerings to elements of the adopted framework. The framework will serve as a developmental guide for less experienced instructors, as well as for instructors seeking to enhance their current practice.
Regarding the evaluation of teaching: It is important to note that decisions about how teaching will be evaluated in the future have not yet been made. Consistent with the research on responsible evaluation of teaching, we expect the (eventual) system of evaluation adopted at SLU will require multiple sources of evidence (e.g., syllabi, Canvas sites, course materials, teaching observations, student feedback surveys, etc.), drawn from multiple perspectives (e.g., instructor self-evaluation, observations and course material reviews by peers, feedback from students, etc.). Ultimately, whatever framework is adopted at SLU will need to be one for which different kinds of evidence could allow an instructor to demonstrate their effectiveness for each element of the framework.