The Phase 2 Project Team of the joint Faculty Senate/Provost’s Office Teaching Effectiveness Project now shares final Guidelines on Holistic Evaluation of Teaching and Rubric Templates for use in Phase 3 pilots, if moving to a pilot phase is supported by the Faculty Senate and the Council of Academic Deans and Directors (CADD).
These materials, along with the results of Phase 3 activities beginning in the 2026-2027 academic year, will inform the future implementation of a holistic system of teaching evaluation at SLU. Drafts of these materials were shared with the SLU community for feedback and discussion earlier this spring. (For a high-level summary of the feedback and any actions taken, see the Summary Report: Feedback on Guidelines and Rubric Templates.) A print-friendly version of the content on this webpage is available as a PDF.
At this time, we are seeking endorsement from Faculty Senate and CADD to move these materials to a pilot phase with academic units [1] that choose to participate. The goal for the pilot would be to learn how the guidelines and templates work in practice, make any needed adjustments based on lessons learned, and begin building a set of resources and example banks that subsequent schools/departments could use when developing their own rubrics and implementation plans. Phase 3 also will include targeted work on Student Feedback and Peer Feedback, as described below.
[1] We use the term “academic unit” to indicate a school, college, department, or program. Some schools do not have departments; others may prefer a college-wide approach or may delegate this work to departments. The idea is that the “unit” developing customized materials and approaches to evaluation ought to be at the level at which evaluation of teaching and faculty work occurs. So, if a dean typically evaluates faculty on their teaching, the “unit” likely would be the school. If a chair typically evaluates faculty on their teaching, the “unit” likely would be the department. Ultimately, academic units need to determine this for themselves, since no single definition will work for all contexts.
Historically, at SLU as at many other institutions, the formal evaluation of teaching has over-relied on end-of-term student feedback. The joint Faculty Senate/Provost Teaching Effectiveness Project was launched to support more holistic, evidence-based approaches to evaluating teaching across the University.
Since its inception, this Project has consistently been guided by several key commitments:
SLU’s Catholic, Jesuit identity. Our institutional values and identity inform all three dimensions of SLU’s Teaching Effectiveness Framework. Jesuit education is human-centered and prioritizes learning that transcends technical knowledge.
Research-informed teaching. The research on effective teaching and learning is robust and includes published work embedded within all academic disciplines, as well as published work that transcends disciplinary difference (including research from the learning sciences). All the Essential Practices articulated in SLU’s Teaching Effectiveness Framework are supported by research, including those listed as Mission-Aligned practices. Using research-informed teaching practices as the foundation of our system of teaching evaluation – while also creating space for faculty to try new things, drawing on evidence from their own students and courses to understand if/how those new approaches are working – helps to ensure SLU students experience consistently high-quality instruction.
Research-informed evaluation. The research on responsible, holistic evaluation of teaching is now well-documented. Adopting research-informed evaluation practices helps to ensure fairness and equity in faculty performance evaluation. It also helps to reduce the potential impacts of human biases and to raise the visibility of teaching-related work that often falls disproportionately to women faculty and faculty of color.
The highly contextualized nature of teaching. Both the research literature and Ignatian pedagogy make clear there is no one-size-fits-all approach to teaching and no specific instructional methods will be right for all contexts. There are evidence-based features of effective teaching that all instructors should adopt; many of these appear as Essential Practices in SLU’s Teaching Effectiveness Framework. However, the ways in which each instructor demonstrates those Essential Practices will vary, based on discipline, students, teaching philosophies, course levels, course types, and other contextual factors (including instructors’ own development as teachers).
A comprehensive and growth-oriented conception of the work of “teaching.” Teaching is much more than what happens within the confines of a classroom or an online course, and the work of becoming an effective teacher is never complete. Thus, SLU’s Teaching Effectiveness Framework and the guidelines for holistic evaluation of teaching outlined here include the often-invisible work of teaching (such as course design and preparation) and prioritize instructor development over time.
In Phase 1 (2025-2026), SLU’s Teaching Effectiveness Framework was developed through a robust faculty-engagement process, endorsed by the Faculty Senate and the Council of Academic Deans and Directors (CADD), and approved by the Provost in spring 2025. The Framework prioritizes the kinds of evidence-based, mission-aligned teaching that supports student success (a key focus of President Feser’s new strategic plan). It also lays the groundwork for more equitable and responsible evaluation of teaching.
Phase 2 (2026-2027) has focused on developing the parameters for – and rubric templates to support – a holistic system of teaching evaluation. This work has been iterative. Draft guidelines were circulated in February 2026, with community feedback informing the final guidelines and draft rubric templates shared with the SLU community in March 2026, for an additional round of feedback.
Based on the actionable feedback collected, the team has made only minor revisions to the materials presented here. (See the Summary Report: Feedback on the Final Guidelines and Rubric Templates to learn more about the feedback on the most recent drafts and the team’s responses to it.) Most of the community’s response to the Guidelines came in the form of questions, particularly questions about how the Guidelines and Rubric Templates might be operationalized. Below, we share the Final Guidelines and Rubric Templates which will inform a pilot phase (beginning in fall 2026), if the Faculty Senate and CADD endorse moving to a pilot phase.
Since its inception, the Teaching Effectiveness Project has been committed to addressing fundamental issues in historical practices of teaching evaluation, including a lack of fairness and equity in how teaching is evaluated at SLU. The team believes moving away from an over-reliance on student feedback surveys and adopting the evidence-based approach described in this document will substantially increase fairness and equity in the evaluation of teaching for all faculty. The feedback from the community supports this view, and we look forward to seeing how the guidelines and rubric templates work during a pilot phase.
The fundamentals of holistic, responsible teaching evaluation are well-documented. [2] Holistic evaluation of teaching aims to arrive at a comprehensive view of one’s teaching, informed by a range of perspectives and multiple sources and types of evidence, for a given review period. Furthermore, during formal review processes, a holistic evaluation should consider both the degree of one’s effectiveness as a teacher and the extent to which one meets performance expectations for teaching, as established by the academic unit/leader.
These are two different, but connected, layers. For instance, a faculty member who is experimenting with a new approach to teaching might find that it takes several semesters to hone the effectiveness of the new approach. However, in a performance evaluation process (e.g., annual review), the fact that the faculty member attempted something new may well meet or even exceed expectations. Thinking about assessments of effectiveness as related to, but different from, performance evaluation helps to create protected space for faculty to experiment with new approaches in their teaching without a built-in negative impact on performance evaluation.
For full-time faculty members, the evaluation of teaching is only one component of their annual or periodic (mid-point, promotion, tenure) evaluations. And for many faculty, their assigned teaching workload includes a range of activities [3], not just course-based instruction.
Figure 1 illustrates the relationships among these elements of the evaluation context.
Of course, any faculty member also may receive informal and formative feedback on their teaching, either as part of a larger review process or as part of an individual development plan. (For more on the differences between evaluation of a faculty member and evaluation of their teaching, see Appendix A below.) While the review context for other instructors (e.g., adjunct/part-time faculty, graduate students, and teaching staff) looks different, the overarching features of holistic evaluation still apply.
[2] See the Primer on Holistic Evaluation of Teaching on the Teaching Effectiveness Project website.
[3] We recognize that some of the activities listed under Evaluation of Teaching in Figure 1 may fall under Service for many faculty, but in cases where these activities do fall under Teaching, they should be considered when a chair or dean provides a summative evaluation of the faculty member’s teaching performance.
Formal evaluations of teaching will look different in different contexts, depending on the instructor’s appointment type (e.g., full-time faculty, part-time/adjunct faculty, graduate students, staff), the purpose of the evaluation being conducted (e.g., annual review, promotion review), and the cadence of review (e.g., by term, by academic year, over several academic years).
Minimally, formal evaluations of teaching for all full-time faculty (and for all other instructors, to the extent reasonable) must:
Be grounded in SLU’s Teaching Effectiveness Framework for course-based instruction.
The framework articulates a set of Essential Practices that all instructors should strive to adopt to some degree. As such, it represents a set of shared priorities for teaching and provides a foundation for consistency in teaching evaluation across the University. However, as the Template for Rubric #1 (below) makes clear, effective expressions of the Essential Practices will look different in different disciplines.
Consider multiple lenses on teaching, including those of the instructor, their students, and (periodically) their peers.
Including multiple lenses on teaching ensures evaluators have a more comprehensive view of an instructor’s teaching and helps to mitigate the impact (positive or negative) of biases present in any single perspective. Note: While peer feedback always has the potential to be useful, it is not required for annual review unless an academic unit determines otherwise. Appendix B shows some of the benefits each lens can provide, as well as the types of evidence that might be drawn from each.
Consider multiple forms and types of evidence.
Formal evaluations of teaching should consider evidence from each of the lenses described above and gathered over multiple courses and terms (unless the instructor only teaches a single course within a given evaluation period). Appendix B shows some types of evidence that might be drawn from instructors, students, and peers as part of a holistic evaluation process.
Expect and support instructor growth and development over time.
While formal evaluations of teaching typically serve summative purposes (i.e., reaching an evaluative conclusion for the purposes of personnel decisions), even those evaluations should support formative purposes (i.e., contribute to individual instructors’ growth and development as teachers). Development over time is expected in SLU’s Teaching Effectiveness Framework, and there should always be feedback instructors can learn from as part of any formal evaluation process, even when the primary focus of an evaluation is summative.
How these expectations are operationalized will vary by unit and by review type (see below for more).
For full-time faculty in all ranks and appointment types, evaluation of teaching performance should consider both the course-based and non-course-based activities that comprise their workload assignment for a given evaluation period. See Appendix C (below) for examples of common course-based and non-course activities that may fall under a faculty member’s assigned teaching workload. The Overarching Guidelines outlined above should inform both annual and periodic (mid-point, promotion, tenure review) evaluations, though the focus and methods of evaluation likely will differ. Below are some key ways we anticipate annual and periodic evaluation of teaching will differ.
In addition to meeting the overarching guidelines above, annual evaluation of full-time faculty should:
Consider both course-based instruction and (any assigned) non-course teaching activities for the year
For course-based instruction:
a. Consider all courses taught that year
b. Focus on one or more elements of the Teaching Effectiveness Framework, as agreed-upon in annual goal-setting by the faculty member and chair/dean.
c. On an annual basis, a faculty member may opt to work on specific Essential Practices or on one of the three dimensions. Faculty are not expected to actively work on all elements of the framework every year. Note: over time, faculty should receive chair/dean feedback on all aspects of the framework.
d. Consider the faculty member’s effectiveness as a teacher using a unit-established rubric. [See Template for Rubric #1 below.]
e. Evaluate the faculty member’s alignment with specific performance expectations as established by the chair/unit and faculty member for that year, with attention to relevant contextual factors (e.g., number of courses taught, level of courses taught, number of new vs. existing courses taught, etc.). [See Template for Rubric #2 below.]
For non-course teaching activities:
a. Consider all non-course teaching workload activities for that year
b. Evaluate these activities against specific expectations established by the chair/unit and faculty member for that year. [See Template for Rubric #2 below.]
Annual review involves the chair/dean drawing evaluative conclusions by considering the totality of evidence, across course-based and non-course teaching activities, for a given year, as well as the specific context of the faculty member’s annual goals (as agreed-upon with the chair/dean). At the same time, annual reviews also should provide formative feedback to guide future growth and development.
In addition to meeting the overarching guidelines above, periodic (mid-point/tenure/promotion) evaluation of full-time faculty should:
Consider both course-based instruction and (any assigned) non-course teaching activities for the review period
For course-based instruction:
a. Consider all courses and terms taught during the review period
b. Address all elements of the Teaching Effectiveness Framework. Note: for mid-point evaluation, formative feedback should indicate areas of improvement/growth needed.
c. Consider the faculty member’s effectiveness as a teacher using a unit-established rubric. [See Template for Rubric #1 below.]
d. Evaluate the faculty member’s teaching against specific expectations established by the unit for the given level of promotion, with attention to relevant contextual factors (e.g., number of courses taught, level of courses taught, number of new vs. existing courses taught, etc.) [See Template for Rubric #2 below.]
For non-course teaching activities:
a. Consider all non-course teaching workload activities for all terms of the review period
b. Evaluate these activities against specific expectations established by the unit for a given level of promotion. [See Template for Rubric #2 below.]
Periodic review involves a broader evaluation of the extent to which the faculty member meets the criteria and expectations for promotion. It also should consider the totality of evidence, across course-based and non-course teaching activities, for the given evaluation period. At the same time, mid-point and promotion review also should provide formative feedback to guide future growth and development.
For instructors who are not full-time SLU faculty, evaluation will necessarily look different. The cadence and focus of teaching evaluation in these cases will vary.
Adjunct/part-time faculty and non-faculty instructors (such as graduate student instructors/TAs [4] and staff members) often have more limited agency in their teaching and/or may teach only one course or teach only infrequently. Other types of instructors, such as those teaching SLU students in clinical settings or those training SLU students on flight simulators, teach in highly individualized, highly specialized learning contexts for which the Essential Practices of the Teaching Effectiveness Framework may not be fully applicable.
Certainly, in many of these situations, it will not be possible to evaluate teaching in ways that fully align with the expectations listed above. However, for course-based instruction, academic leaders (whoever oversees the instructor) should do their best to ensure the following expectations are met, to the extent possible:
The expectations for effective teaching should be clearly articulated and align with the Teaching Effectiveness Framework. [See Template for Rubric #1 below.]
All instructors should receive formative feedback on their teaching, on a regular basis (if applicable). For graduate student Teaching Assistants, this should include direct feedback from a faculty supervisor, program director, course coordinator, chair, or dean whenever possible.
In cases where evaluative judgments will inform personnel decisions – for instance, whether or not an instructor will be invited to continue teaching at SLU – such judgments should be based on multiple sources of evidence, representing multiple lenses on teaching, and across multiple courses/terms (if applicable). Consistent with existing policy, student feedback cannot be the “sole measure” of the instructor’s performance for the purposes of personnel decisions.
[4] It is important to note that these guidelines refer to graduate student instructors/Teaching Assistants who serve as the main course instructor. The guidelines are not appropriate for graduate students who support a faculty member serving as the main course instructor, or for other “teaching assistant”-like roles, such as undergraduate TAs, undergraduate Learning Assistants, student Peer Mentors, and the like.
Understanding how the guidelines above will be operationalized is critical to moving forward with large-scale adoption of this type of evaluation system. There is much we cannot know, in the abstract, about what this might look like on the ground and what kinds of additional resources, workload credit, training, and more would be needed to make it work efficiently.
The Phase 2 Team expects that the guidelines will be operationalized differently in different contexts. That is why the team is recommending the next step in this initiative be a pilot phase in which a small number of academic units develop customized rubrics (see Phase 3 Overview and Anticipated Pilot Activities below), create plans for implementation, and identify other operational matters that must be refined before a broader rollout. Pilot groups would help to clarify a number of issues SLU faculty have already identified as critical to this project’s success, including articulating workload considerations, identifying an efficient accountability model, and more. (See the section on Phase 3 activities below for more on the kinds of questions we will seek to answer during a pilot phase.)
To create an element of consistency – while also allowing for significant disciplinary/unit-level customization – the two Rubric Templates will serve as a starting point for operationalizing the guidelines.
The Rubric Templates are designed to be populated by academic units. The templates address two different, but related, aspects of holistic teaching evaluation: assessing effectiveness (Rubric #1 template) and evaluating performance (Rubric #2 template). Table 1 summarizes the intended relationship between the rubrics. Figure 2 offers a visual representation of how assessment of effectiveness contributes to the holistic evaluation of teaching, which contributes to the holistic evaluation of the faculty member.
The templates offer some elements of consistency, but the expectations in both are ultimately defined by academic units. This approach addresses concerns raised by SLU faculty about the need to ensure an appropriate balance of consistency across the institution and disciplinary autonomy within academic units.
It is important to note that academic units already have criteria for evaluating faculty teaching performance, but they may not all be written down in ways that promote fair and equitable evaluation practices. Even in cases where academic units have clearly articulated expectations for teaching, those expectations now need to be reconciled with SLU’s adopted Teaching Effectiveness Framework.
Phase 3 (beginning in 2026-2027) will involve different groups working on different aspects of the Teaching Effectiveness Project. In early fall 2026, we will issue calls for faculty to participate in various Phase 3 activities described below.
Consistent with the literature, student feedback will remain a component of holistic teaching evaluation at SLU, though it will play a different role in formal evaluations than it has done previously. While evaluation practices have relied almost solely on the Blue End-of-Term Course Feedback Surveys, many faculty have long incorporated other forms of student feedback in their own teaching practice (such as mid-semester feedback surveys, notecard activities asking students for items to start-stop-change, and/or the Reinert Center’s Small Group Instructional Feedback (SGIF) sessions). Additionally, since the Teaching Effectiveness Project’s inception, the project teams have consistently received questions about, and feedback on, the Blue Surveys.
Looking ahead to Phase 3, we anticipate at least four kinds of work under the broad umbrella of student feedback on teaching:
Raising awareness about the many ways instructors can solicit and engage with student feedback on teaching. This will involve working with the Reinert Center for Transformative Teaching and Learning to raise awareness of SGIF sessions and to develop other resources that highlight different ways of, and purposes for, soliciting meaningful feedback from students about their course-based learning experiences. In this phase, it also will be important to understand the potential impact on support resources at SLU, including the Reinert Center, if demand for their services in this area increases.
Enhancing efforts to better support students in providing useful feedback on teaching. In the spring of 2024, the Provost’s Office published a Student Guide on End-of-Term Course Feedback Surveys (available in multiple formats to promote ease of engagement, including formats instructors can incorporate into their own courses). Framed by the components of the Ignatian Pedagogical Paradigm, the guide provides tips for students on engaging in this feedback process in ways that align with SLU’s values. Throughout the Teaching Effectiveness Project, teams have heard that additional education for students would be helpful.
Determining how faculty and administrators can responsibly incorporate student feedback into the holistic evaluation of teaching. This work will involve developing resources/trainings to ensure responsible use of student feedback going forward. It also will involve pilot groups working through important questions about the role and place of student feedback as they operationalize the shift to a holistic evaluation of teaching. Some questions the pilot groups will seek to answer are:
In addition to the Blue Surveys, what other sources of student feedback might be appropriate within the unit? All students enrolled in SLU courses are invited to complete the Blue Surveys, but these are only one form of student feedback [see Appendix B]. The Phase 3 Project team(s) will guide pilot groups as they decide on more options for collecting student feedback.
Where does/might student feedback serve as evidence in the rubrics? As pilot groups develop, test, and revise their rubrics, where is student feedback most visible? Most useful? How should student feedback not be used when assessing effectiveness (Rubric #1) and evaluating performance (Rubric #2)?
How will student feedback be considered alongside self-reflection and peer feedback?
Updating Blue Surveys and related processes. This work will be done in partnership with those in the Provost’s Office who oversee the Blue system (Steve Sanchez and Marissa Cope), who have held off on initiating conversations about updates until the Teaching Effectiveness Project established clear expectations for holistic evaluation of teaching. Aside from the addition (in winter 2023) of a research-validated introduction that aims to reduce the impact of biases in student responses, the Blue survey instrument has not been updated since 2018-2019. [5] From the inception of the project, the Co-Chairs have been collecting suggestions, questions, and concerns about the Blue Surveys; that feedback will inform the work of Phase 3, which will need to address key questions about the next iteration of the surveys, including:
How does/should the Blue Survey align with SLU’s Teaching Effectiveness Framework? How does/should it fit into the overarching system of holistic teaching evaluation?
What changes need to be made to the Blue Survey instrument itself (questions, framing, etc.) to make the survey an effective tool for collecting student feedback as part of a holistic system of evaluation? Proposed changes may include things coming out of emerging research or well-documented pain points, but they also should be based on the place/role of the Blue Surveys in the broader system of evaluation.
What changes need to be made to the implementation of the Blue Survey (timing, course types, etc.)? Proposed changes may be needed to address faculty concerns about response rates, timing of survey openings/closings, and more.
Peer feedback on teaching can take many forms, and different institutions approach it in different ways. For example, some institutions leave this wholly to individual departments, while others create an institution-wide “peer review corps” that can serve faculty in any discipline. For SLU, no decisions have been made about the form or nature of peer feedback. In general, we expect to leave the details of what might work best to academic units, but we also anticipate SLU might develop a peer review corps for units that want to draw from a broader pool of faculty for peer feedback.
Using feedback and suggestions the project teams have already received (and will continue to receive), Phase 3 will begin to plan for what peer feedback on teaching at SLU might look like. This work must be informed by the Phase 3 pilot groups as well. Some of the questions that will need to be addressed include:
What counts as peer feedback? The Phase 2 Team has repeatedly noted that peer feedback need not be a formal peer observation of a live class period. The Phase 3 team(s) will develop parameters and guidance on different forms of peer feedback and how they might be used as evidence in the holistic evaluation of teaching. Pilot groups also will help to identify how peer feedback might be aligned to the rubrics they are creating and testing. No decisions about the forms of feedback have been made in advance of the next phase, and there likely will not be a single answer that works for all contexts.
Who can/should provide peer feedback (and when)? There are many definitions of “peer” in the literature on the holistic evaluation of teaching [see Appendix B]. These can include colleagues within one unit, colleagues within the university, and colleagues at other institutions. The Phase 3 team(s) will consider the benefits and challenges of these groups to make recommendations for this process at SLU, but no decisions about who will provide this feedback have been made in advance of the next phase, and there likely will not be a single answer that works for all contexts.
What kind of training/support would peers need to provide effective feedback? SLU faculty have repeatedly indicated that it will be important to offer development and support for faculty peer reviewers. While no decisions have been made about what this might look like at SLU, there are good models we can look to at other institutions, and the Reinert Center already works with groups on effective strategies for peer feedback on teaching.
How can we reduce bias in peer feedback? As with any kind of feedback or evaluation, there will be bias in peer feedback. The Phase 3 team(s) working on peer feedback will consider strategies and methods for mitigating bias in peer feedback. These might include a toolkit like the Student Guide and/or a series of training sessions for peer reviewers. No decisions about the mitigation efforts have been made in advance of the next phase. The Phase 3 team will also consider what accountability measures are needed and what form that accountability structure would take.
Where/how will peer feedback on teaching be accounted for in faculty workload? This is a very common question from SLU faculty who are engaged in this project, and the Phase 3 team(s) addressing peer feedback will need to make recommendations for this important topic. While no decisions have been made about this, it is likely that this will look different in different units. (For instance, in some colleges, there may be a group of faculty who serve as peer reviewers for an academic year and who have that work recognized in their Service workload.) Minimally, there should be clear expectations for, and accounting of, peer feedback somewhere in faculty members’ workloads.
Perhaps the most important work of Phase 3 will be the pilots, which will begin operationalizing the recommendations made in these Guidelines and testing the Rubric Templates. The Phase 2 Team has collected, and will continue to collect, community feedback that informs the work of the pilot groups in Phase 3. The team anticipates that the pilot groups will vary in size, structure, and discipline, and that they will represent a variety of faculty roles. Note: academic units interested in participating in the pilot phase should complete the Phase 3 Pilot Interest Form.
The groups will develop their own rubrics, using the templates provided in this document, and they will develop plans for operationalizing the Guidelines. They will also determine how to operationalize their own rubrics as part of the pilot. The team recognizes that units participating in the pilot will each need to work at a pace suitable for their own context and that the work of operationalizing the rubrics may extend beyond the 2026-2027 academic year.
Some of the questions these groups will consider include:
What are the acceptable forms of evidence and how much should be provided?
Will Rubrics #1 and #2 be developed at the same time?
What will operationalizing the rubrics look like? Will it involve narratives, drop-down menus, annotations, etc.?
How will rubrics be used for joint-appointed faculty?
How does quantitative data regarding teaching workload get considered (in practice)?
In what ways, if any, does this work put more scrutiny on teaching-focused faculty and/or have an outsized impact on their ability to be promoted?
What training and support will chairs and deans need? What form might these trainings take? How often will they happen?
What kinds of support and development do academic units need to engage in this work? (And what are the potential resource implications for the Reinert Center and other campus resources?)
How does this holistic system of evaluation, and the tools supporting it, promote or inhibit teaching innovation?
What else do units need to adjust in order to support this approach? Do units need to revise workload and P&T guidelines as a result of this work? In what ways?
What, if any, inequities become visible as this work begins to be operationalized and tested?
What kinds of accountability mechanisms should be in place to ensure fidelity to the spirit of these Guidelines? Where should that mechanism be housed? (For example, a Faculty Senate committee? Something else?)
Does this create a system of responsible, holistic evaluation that is transparent, feasible, equitable, and sustainable?
Individuals and units not engaged in pilot activities can still begin grounding teaching and teaching evaluation in the work of the Teaching Effectiveness Project, should they wish. Here are a few possibilities:
Individual faculty/instructors can begin grounding their own self-evaluation practices in SLU’s Teaching Effectiveness Framework. There is already a “worksheet” on the TEP website to support instructors in this kind of activity.
Individual faculty/instructors can continue to develop their own teaching in alignment with the Framework by participating in development opportunities offered by the Reinert Center. The Reinert Center website now tags events and digital resources with the dimensions of the Framework, to make it easier for SLU instructors to see how they might target their teaching development to the Framework’s dimensions and Essential Practices.
Departments, schools, and colleges can begin conversations about the ways in which they will prioritize elements from the Framework. They also may wish to work with the Reinert Center on customized workshops and other programming that features the Framework’s Essential Practices.
Programs with graduate Teaching Assistants may wish to begin grounding teaching expectations (and perhaps even formative feedback on teaching) in the Framework.
Academic units that already have a teaching evaluation rubric or other criteria for teaching evaluation could begin discussions about how those materials “map” to the Teaching Effectiveness Framework and/or the Guidelines for Holistic Teaching Evaluation, in order to identify starting points for aligning their practices and criteria with the expectations articulated here.
Any of these activities would help to lay the groundwork for future implementation for faculty and academic units that want to begin moving toward holistic evaluation grounded in SLU’s Framework but are not able to participate in the pilot.
[5] In 2018, the Provost’s Office led a broad faculty engagement process to determine which changes needed to be made. Based on direction from SLU faculty members, the changes made at that time were relatively minimal.