The Phase 2 Project Team of the joint Faculty Senate/Provost’s Office Teaching Effectiveness Project is pleased to share its final Guidelines on Holistic Evaluation of Teaching and a set of DRAFT Rubric Templates (offered as proof-of-concept prototypes) to support the holistic evaluation of teaching at SLU. These materials are being shared with the SLU community for feedback and discussion. They are informed by community feedback on the DRAFT Guidelines earlier this spring. (See the Summary Report: Feedback on DRAFT Guidelines for more.) You may find a print-friendly PDF of the content on this page here and a brief video overview here. Note: for all PDFs on this page, it is recommended that you download the file, so you can zoom in as needed.
Feedback on these drafts may be shared via the Final Guidelines and DRAFT Rubric Templates Feedback Form through Friday, May 1, 2026. The Phase 2 Team also continues to be available to meet with faculty assemblies/councils and other groups upon request. The goal is to revise these materials based on community feedback and to seek endorsement (from the Faculty Senate and the Council of Academic Deans and Directors) to move to a pilot phase in 2026-2027.
If the conceptual model described below is endorsed for a pilot phase, we would plan to test the model by inviting a small number of academic units [1] to populate their own rubrics and determine how best to operationalize the guidelines for their contexts. The goal for the pilot would be to learn how the proposed guidelines and templates work in practice, make any needed adjustments based on lessons learned, and begin building a set of resources and example banks that subsequent schools/departments could use when developing their own rubrics and implementation plans.
[1] Throughout these guidelines, we use the term “academic unit” to indicate a school, college, department, or program. Some schools do not have departments; others may want a college-wide approach or may delegate it to departments. The idea is that the “unit” developing customized materials and approaches to evaluation ought to be at the level at which evaluation of teaching and faculty work occurs. So, if a Dean typically evaluates faculty on their teaching, the “unit” likely would be the school level. If a Chair typically evaluates faculty on their teaching, the “unit” likely would be the department. Ultimately, academic units need to determine this for themselves, since no single definition will work for all contexts.
Historically, at SLU as at many other institutions, the formal evaluation of teaching has over-relied on end-of-term student feedback. While some academic units have made strides in adopting more holistic, evidence-based approaches, evaluation practices across the University remain varied, are inconsistently aligned with the research literature, and at times undermine fairness and equity in faculty personnel decisions. For these reasons, the Faculty Senate called for significant change in how faculty are evaluated, with a particular emphasis on reforming the evaluation of teaching.
The Teaching Effectiveness Project was launched to address these concerns. The project aims to better define, document, enhance, evaluate, and recognize effective teaching in meaningful ways that align with SLU’s institutional identity. Phase 1 focused on the creation and approval of SLU’s Teaching Effectiveness Framework. Phase 2 focuses on developing the parameters of a holistic system of teaching evaluation at SLU. When a new system is fully implemented, the evaluation of teaching by full-time faculty members in all academic units should better align with the literature on holistic evaluation of teaching. This will raise the visibility of excellent teaching, make faculty evaluation fairer and more equitable, and enhance teaching effectiveness and student experience across the University.
Since its inception, this Project has consistently been guided by several key commitments:
SLU’s Catholic, Jesuit identity. Our institutional values and identity inform all three dimensions of SLU’s Teaching Effectiveness Framework. Jesuit education is human-centered and prioritizes learning that transcends technical knowledge.
Research-informed teaching. The research on effective teaching and learning is robust and includes published work embedded within all academic disciplines, as well as published work that transcends disciplinary difference (including research from the learning sciences). All the Essential Practices articulated in SLU’s Teaching Effectiveness Framework are supported by research, including those listed as Mission-Aligned practices. Using research-informed teaching practices as the foundation of our system of teaching evaluation – while also creating space for faculty to try new things, drawing on evidence from their own students and courses to understand if/how those new approaches are working – helps to ensure SLU students experience consistently high-quality instruction.
Research-informed evaluation. The research on responsible, holistic evaluation of teaching is now well-documented. Adopting research-informed evaluation practices helps to ensure fairness and equity in faculty performance evaluation. It also helps to reduce the potential impacts of human biases and to raise the visibility of teaching-related work that often falls disproportionately to women faculty and faculty of color.
The highly contextualized nature of teaching. Both the research literature and Ignatian pedagogy make clear that there is no one-size-fits-all approach to teaching and that no specific instructional method will be right for all contexts. There are evidence-based features of effective teaching that all instructors should adopt; many of these appear as Essential Practices in SLU’s Teaching Effectiveness Framework. However, the ways in which each instructor demonstrates those Essential Practices will vary, based on discipline, students, teaching philosophies, course levels, course types, and other contextual factors (including instructors’ own development as teachers).
A comprehensive and growth-oriented conception of the work of “teaching.” Teaching is much more than what happens within the confines of a classroom or an online course, and the work of becoming an effective teacher is never complete. Thus, SLU’s Teaching Effectiveness Framework, and the guidelines for holistic evaluation of teaching outlined here, include the often-invisible work of teaching (such as course design and preparation) and prioritize instructor development over time.
The fundamentals of holistic, responsible teaching evaluation are well-documented [2], and the DRAFT Guidelines shared with the SLU community in early 2026 align well with the literature. Most of the community’s response to the DRAFT Guidelines came in the form of questions, particularly questions about how the Guidelines might be operationalized.
Therefore, in addition to sharing the final proposed guidelines, the team also is sharing a set of DRAFT Rubric Templates, offered as proof-of-concept prototypes for community feedback and discussion, which would support the holistic evaluation of teaching at SLU. If the conceptual model is endorsed to move forward to a pilot phase, we would test the guidelines and the rubric templates in Phase 3 (beginning in 2026-2027).
Since its inception, the Teaching Effectiveness Project has been committed to addressing fundamental issues in historical practices of teaching evaluation, including a lack of fairness and equity in how teaching is evaluated at SLU. The team believes moving away from an over-reliance on student feedback surveys and adopting the evidence-based approach described in this document will substantially increase fairness and equity in the evaluation of teaching for all faculty. The feedback from the community supports this view, and we look forward to seeing how the guidelines and draft rubric templates work during a pilot phase.
[2] See the Primer on Holistic Evaluation of Teaching on the Teaching Effectiveness Project website.
Based on the actionable feedback collected, the team has made only minor revisions to the proposed Guidelines for the Holistic Evaluation of Teaching. (See the Summary Report: Feedback on the DRAFT Guidelines to learn more about the feedback on the draft and the team’s responses to it.)
Holistic evaluation of teaching aims to arrive at a comprehensive view of one’s teaching, informed by a range of perspectives and multiple sources and types of evidence, for a given review period. Furthermore, during formal review processes, a holistic evaluation should consider both the degree of one’s effectiveness as a teacher and the extent to which one meets performance expectations for teaching, as established by the academic unit/leader.
These are two different, but connected, layers. For instance, a faculty member who is experimenting with a new approach to teaching might find that it takes several semesters to hone the effectiveness of the new approach. However, in a performance evaluation process (e.g., annual review), the fact that the faculty member attempted something new may well meet or even exceed expectations. Thinking about assessments of effectiveness as related to, but different from, performance evaluation helps to create protected space for faculty to experiment with new approaches in their teaching without a built-in negative impact on performance evaluation.
For full-time faculty members, the evaluation of teaching is only one component of their annual or periodic (mid-point, promotion, tenure) evaluations. And for many faculty, their assigned teaching workload includes a range of activities [3], not just course-based instruction.
Figure 1 [LINK to PDF] illustrates the relationships among these elements of evaluation context.
Of course, any faculty member also may receive informal and formative feedback on their teaching, either as part of a larger review process or as part of an individual development plan. (For more on the differences between evaluation of a faculty member and evaluation of their teaching, see Appendix A below.) While the review context for adjunct/part-time faculty, graduate students, and teaching staff looks different, the overarching features of holistic evaluation still apply.
[3] We recognize that some of the activities listed under Evaluation of Teaching in Figure 1 may fall under Service for many faculty, but in cases where these activities do fall under Teaching, they should be considered when a chair or dean is providing a summative evaluation of the faculty member’s teaching performance.
Formal evaluations of teaching will look different in different contexts, depending on the instructor’s appointment type (e.g., full-time faculty, part-time/adjunct faculty, graduate students, staff), the purpose of the evaluation being conducted (e.g., annual review, promotion review), and the cadence of review (e.g., by term, by academic year, over several academic years).
Minimally, formal evaluations of teaching for all full-time faculty (and for all other instructors, to the extent reasonable) must:
Be grounded in SLU’s Teaching Effectiveness Framework for course-based instruction.
The framework articulates a set of Essential Practices that all instructors should strive to adopt to some degree. As such, it represents a set of shared priorities for teaching and provides a foundation for consistency in teaching evaluation across the University. However, as the DRAFT Template for Rubric #1 (below) makes clear, effective expressions of the Essential Practices will look different in different disciplines.
Consider multiple lenses on teaching, including those of the instructor, their students, and (periodically) their peers.
Including multiple lenses on teaching ensures evaluators have a more comprehensive view of one’s teaching and helps to mitigate the impact (positive or negative) of biases that are present in any single perspective on teaching. Note: While peer feedback always has the potential to be useful, peer feedback is not required for annual review, unless an academic unit determines otherwise. Appendix B shows some of the benefits each lens can provide, as well as the types of evidence that might be drawn from each.
Consider multiple forms and types of evidence.
Formal evaluations of teaching should consider evidence from each of the lenses described above and gathered over multiple courses and terms (unless the instructor only teaches a single course within a given evaluation period). Appendix B shows some types of evidence that might be drawn from instructors, students, and peers as part of a holistic evaluation process.
Expect and support instructor growth and development over time.
While formal evaluations of teaching typically serve summative purposes (i.e., reaching an evaluative conclusion for the purposes of personnel decisions), even those evaluations should support formative purposes (i.e., contribute to individual instructors’ growth and development as teachers). Development over time is expected in SLU’s Teaching Effectiveness Framework, and there should always be feedback instructors can learn from as part of any formal evaluation process, even when the primary focus of an evaluation is summative.
How these expectations are operationalized will vary by unit and by review type (see below for more).
For full-time faculty in all ranks and appointment types, evaluation of teaching performance should consider both the course-based and non-course-based activities that comprise their workload assignment for a given evaluation period. See Appendix C (below) for examples of common course-based and non-course activities that may fall under a faculty member’s assigned teaching workload. The guidelines outlined above should inform both annual and periodic (mid-point, promotion, tenure review) evaluations, though the focus and methods of evaluation likely will differ. Below are some key ways we anticipate annual and periodic evaluation of teaching will differ.
In addition to meeting the overarching guidelines above, annual evaluation of full-time faculty should:
Consider both course-based instruction and (any assigned) non-course teaching activities for the year
For course-based instruction:
a. Consider all courses taught that year
b. Focus on one or more elements of the Teaching Effectiveness Framework, as agreed-upon in annual goal-setting by the faculty member and chair/dean.
c. On an annual basis, a faculty member may opt to work on specific Essential Practices, or on one of the three dimensions. Faculty are not expected to actively work on all elements of the framework on an annual basis. Note: over time, faculty should receive chair/dean feedback on all aspects of the framework.
d. Consider the faculty member’s effectiveness as a teacher using a unit-established rubric. [See DRAFT Template for Rubric #1 below.]
e. Evaluate the faculty member’s alignment with specific performance expectations as established by the chair/unit and faculty member for that year, with attention to relevant contextual factors (e.g., number of courses taught, level of courses taught, number of new vs. existing courses taught, etc.). [See DRAFT Template for Rubric #2 below.]
For non-course teaching activities:
a. Consider all non-course teaching workload activities for that year
b. Evaluate these activities against specific expectations established by the chair/unit and faculty member for that year [See DRAFT Template Rubric #2 below.]
Annual review involves the chair/dean drawing evaluative conclusions by considering the totality of evidence, across course-based and non-course teaching activities, for a given year, as well as the specific context of the faculty member’s annual goals (as agreed-upon with the chair/dean). At the same time, annual reviews also should provide formative feedback to guide future growth and development.
In addition to meeting the overarching guidelines above, periodic (mid-point/tenure/promotion) evaluation of full-time faculty should:
Consider both course-based instruction and (any assigned) non-course teaching activities for the review period
For course-based instruction:
a. Consider all courses and terms taught during the review period
b. Address all elements of the Teaching Effectiveness Framework. Note: for mid-point evaluation, formative feedback should indicate areas of improvement/growth needed.
c. Consider the faculty member’s effectiveness as a teacher using a unit-established rubric. [See DRAFT Template for Rubric #1 below.]
d. Evaluate the faculty member’s teaching against specific expectations established by the unit for the given level of promotion, with attention to relevant contextual factors (e.g., number of courses taught, level of courses taught, number of new vs. existing courses taught, etc.) [See DRAFT Template for Rubric #2 below.]
For non-course teaching activities:
a. Consider all non-course teaching workload activities for all terms of the review period
b. Evaluate these activities against specific expectations established by the unit for a given level of promotion [See DRAFT Template Rubric #2 below.]
Periodic review involves a broader evaluation of the extent to which the faculty member meets the criteria and expectations for promotion. It also should consider the totality of evidence, across course-based and non-course teaching activities, for the given evaluation period. At the same time, mid-point and promotion review also should provide formative feedback to guide future growth and development.
For instructors who are not full-time SLU faculty, evaluation will necessarily look different. The cadence and focus of teaching evaluation in these cases will vary.
Adjunct/part-time faculty and non-faculty instructors (such as graduate student instructors/TAs [4] and staff members) often have more limited agency in their teaching and/or may teach only one course or teach only infrequently. Other types of instructors – such as those teaching SLU students in clinical settings or those training SLU students on flight simulators, for example – teach in highly individualized, highly specialized learning contexts for which the Essential Practices of the Teaching Effectiveness Framework may not be fully applicable.
Certainly, in many of these situations, it will not be possible to evaluate teaching in ways that fully align with the expectations listed above. However, for course-based instruction, academic leaders (whoever oversees the instructor) should do their best to ensure the following expectations are met, to the extent possible:
The expectations for effective teaching should be clearly articulated. [See DRAFT Template for Rubric #1 below.]
All instructors should receive formative feedback on their teaching, on a regular basis (if applicable). For graduate student Teaching Assistants, this should include direct feedback from a faculty supervisor, program director, course coordinator, chair, or dean whenever possible.
In cases where evaluative judgments will inform personnel decisions – for instance, whether or not an instructor will be invited to continue teaching at SLU – such judgments should be based on multiple sources of evidence, representing multiple lenses on teaching, and across multiple courses/terms (if applicable). Consistent with existing policy, student feedback cannot be the “sole measure” of the instructor’s performance for the purposes of personnel decisions.
[4] It is important to note that these guidelines refer to graduate student instructors/Teaching Assistants who serve as the main course instructor. The guidelines are not appropriate for graduate students who support a faculty member serving as the main course instructor, or for other “teaching assistant”-like roles, such as undergraduate TAs, undergraduate Learning Assistants, student Peer Mentors, and the like.
Understanding how the guidelines above will be operationalized is critical to moving forward with large-scale adoption of this type of evaluation system. There is much we cannot know, in the abstract, about what this might look like on the ground and what kinds of additional resources, workload credit, training, and more would be needed to make it work efficiently.
The Phase 2 Team expects that the guidelines will be operationalized differently in different contexts. That is why the team is recommending the next step in this initiative be a pilot phase in which a small number of academic units would develop customized rubrics (see Phase 3 Overview and Anticipated Pilot Activities below), create plans for implementation, and identify what must be refined before a broader rollout. Pilot groups would help to clarify a number of issues SLU faculty have already identified as critical to this project’s success, including articulating workload considerations, identifying an efficient accountability model, and more.
To create an element of consistency – while also allowing for significant disciplinary/unit-level customization – the team has developed draft templates for two rubrics, which would serve as a starting point for operationalizing the guidelines.
The DRAFT Rubric Templates are shared as proof-of-concept prototypes. They are designed to be populated by academic units. The templates address two different, but related, aspects of holistic teaching evaluation: assessing effectiveness (Rubric #1 template) and evaluating performance (Rubric #2 template). Table 1 summarizes the intended relationship between the rubrics. Figure 2 offers a visual representation of how assessment of effectiveness contributes to the holistic evaluation of teaching, which contributes to the holistic evaluation of the faculty member.
As you will see, the templates offer some elements of consistency, but the expectations in both are ultimately defined by academic units. This approach addresses concerns raised by SLU faculty about the need to ensure an appropriate balance of consistency across the institution and disciplinary autonomy within academic units.
It is important to note that academic units already have criteria for evaluating faculty teaching performance, but they may not all be written down in ways that promote fair and equitable evaluation practices. Even in cases where academic units have clearly articulated expectations for teaching, those expectations now need to be reconciled with SLU’s adopted Teaching Effectiveness Framework.
Phase 3 (beginning in 2026-2027) likely would involve several groups working on different aspects of the Teaching Effectiveness Project. Here are three kinds of work we expect will begin next year.
Consistent with the literature, student feedback will remain a component of holistic teaching evaluation at SLU, though it will play a different role in formal evaluations than it has previously. While evaluation practices have relied almost solely on the Blue End-of-Term Course Feedback Surveys, many faculty have long incorporated other forms of student feedback in their own teaching practice (such as mid-semester feedback surveys, notecard activities asking students for items to start-stop-change, and/or the Reinert Center’s Small Group Instructional Feedback (SGIF) sessions). Additionally, since the Teaching Effectiveness Project’s inception, the project teams have consistently received questions about, and feedback on, the Blue Surveys.
Looking ahead to Phase 3, we anticipate at least three kinds of work under the broad umbrella of student feedback on teaching:
Raising awareness about the many ways instructors can solicit and engage with student feedback on teaching. This will involve working with the Reinert Center for Transformative Teaching and Learning to raise awareness of SGIF sessions and to develop other resources that highlight different ways of, and purposes for, soliciting meaningful feedback from students about their course-based learning experiences. In this phase, it also will be important to understand the potential impact on support resources at SLU, including the Reinert Center, if there is increasing demand for their services in this area.
Enhancing efforts to better support students in providing useful feedback on teaching. In the spring of 2024, the Provost’s Office published a Student Guide on End-of-Term Course Feedback Surveys (it is available in multiple formats to promote ease-of-engagement, including promoting the ability of instructors to incorporate it into their own courses). Framed by the components of the Ignatian Pedagogical Paradigm, the guide provides tips for students on engaging in this feedback process in ways that align with SLU’s values. Throughout the Teaching Effectiveness Project, teams have heard that additional education for students would be helpful.
Determining how faculty and administrators can responsibly incorporate student feedback into the holistic evaluation of teaching. This work will involve developing resources/trainings to ensure responsible use of student feedback going forward. It also will involve pilot groups working through important questions about the role and place of student feedback as they operationalize the shift to a holistic evaluation of teaching. Some questions the pilot groups will seek to answer are:
In addition to the Blue Surveys, what other sources of student feedback might be appropriate within the unit? All students enrolled in SLU courses are invited to complete the Blue Surveys, but these are only one form of student feedback [see Appendix B]. The Phase 3 Project team(s) will guide pilot groups as they decide on more options for collecting student feedback.
Where does/might student feedback serve as evidence in the rubrics? As pilot groups develop, test, and revise their rubrics, where is student feedback most visible? Most useful? How should student feedback not be used when assessing effectiveness (Rubric #1) and evaluating performance (Rubric #2)?
How will student feedback be considered alongside self-reflection and peer feedback?
Updating Blue Surveys and related processes. This work will be done in partnership with those in the Provost’s Office who oversee the Blue system (Steve Sanchez and Marissa Cope), who have been holding off on initiating conversations about updates until the Teaching Effectiveness Project established clear expectations for holistic evaluation of teaching. Aside from adding (in winter 2023) a research-validated introduction to the surveys that aims to reduce the impact of biases in student responses, the Blue survey instrument has not been updated since 2018-2019. [5] From the inception of the project, the Co-Chairs have been collecting suggestions, questions, and concerns about the Blue Surveys. That data will inform the work of Phase 3. As part of Phase 3 of this project, then, it will be important to address key questions about the next iteration of the surveys, including:
How does/should the Blue Survey align with SLU’s Teaching Effectiveness Framework? How does/should it fit into the overarching system of holistic teaching evaluation?
What changes need to be made to the Blue Survey instrument itself (questions, framing, etc.) to make the survey an effective tool for collecting student feedback as part of a holistic system of evaluation? Proposed changes may include things coming out of emerging research or well-documented pain points, but they also should be based on the place/role of the Blue Surveys in the broader system of evaluation.
What changes need to be made to the implementation of the Blue Survey (timing, course types, etc.)? Proposed changes may be needed to address faculty concerns about response rates, timing of survey openings/closings, and more.
Peer feedback on teaching can take many forms. Different institutions approach this in different ways. For example, some institutions leave this wholly to individual departments, while others create an institution-wide “peer review corps” that can serve faculty in any discipline. For SLU, no decisions have been made about the form or nature of peer feedback. In general, we expect to leave to academic units the details of what might work best for them, but we also anticipate SLU might develop a peer review corps for units that want to draw from a broader pool of faculty for peer feedback.
Using feedback and suggestions the project teams have already received (and will continue to receive), Phase 3 will begin to plan for what peer feedback on teaching at SLU might look like. This work must be informed by the Phase 3 pilot groups, as well. Some of the questions that will need to be addressed include:
What counts as peer feedback? The Phase 2 Team has repeatedly noted that peer feedback need not be a formal peer observation of a live class period. The Phase 3 team(s) will develop parameters and guidance on different forms of peer feedback and how they might be used as evidence in the holistic evaluation of teaching. Pilot groups also will help to identify how peer feedback might be aligned to the rubrics they are creating and testing. No decisions about the forms of feedback have been made in advance of the next phase, and there likely will not be a single answer that works for all contexts.
Who can/should provide peer feedback (and when)? There are many definitions of “peer” in the literature on the holistic evaluation of teaching [see Appendix B]. These can include colleagues within one unit, colleagues within the university, and colleagues at other institutions. The Phase 3 team(s) will consider the benefits and challenges of these groups to make recommendations for this process at SLU, but no decisions about who will provide this feedback have been made in advance of the next phase, and there likely will not be a single answer that works for all contexts.
What kind of training/support would peers need to provide effective feedback? SLU faculty have repeatedly indicated that it will be important to offer development and support for faculty peer reviewers. While no decisions have been made about what this might look like at SLU, there are good models we can look to at other institutions, and the Reinert Center already works with groups on effective strategies for peer feedback on teaching.
How can we reduce bias in peer feedback? As with any kind of feedback or evaluation, there will be bias in peer feedback. The Phase 3 team(s) working on peer feedback will consider strategies and methods for mitigating bias in peer feedback. These might include a toolkit like the Student Guide and/or a series of training sessions for peer reviewers. No decisions about the mitigation efforts have been made in advance of the next phase.
Where/how will peer feedback on teaching be accounted for in faculty workload? This is a very common question from SLU faculty who are engaged in this project, and the Phase 3 team(s) addressing peer feedback will need to make recommendations for this important topic. While no decisions have been made about this, it is likely that this will look different in different units. (For instance, in some colleges, there may be a group of faculty who serve as peer reviewers for an academic year and who have that work recognized in their Service workload.) Minimally, there should be clear expectations for, and accounting of, peer feedback somewhere in faculty members’ workloads.
Perhaps the most important work of Phase 3 will be the pilots to begin operationalizing the recommendations made in these Guidelines and to test the DRAFT Rubric Templates. The Phase 2 Team has collected, and will continue to collect, community feedback that will inform the work of the pilot groups in Phase 3. The team anticipates that the pilot groups will vary in size, structure, and discipline, and they will represent a variety of faculty roles. The groups will develop their own rubrics, using the templates provided in this document, and they will develop plans for operationalizing the Guidelines.
Based on the feedback we have now, some of the questions these groups will consider include:
What are the acceptable forms of evidence and how much should be provided?
How does quantitative data regarding teaching workload get considered (in practice)?
In what ways, if any, does this work put more scrutiny on teaching-focused faculty and/or have an outsized impact on their ability to be promoted?
What training do chairs and deans need? What form might those trainings take? How often will they happen?
What kinds of support and development do academic units need to engage in this work? (And what are the potential resource implications for the Reinert Center and other campus resources?)
How does this system, and the tools supporting it, promote or inhibit teaching innovation?
What else do units need to adjust in order to support this approach? Do units need to revise workload and P&T guidelines as a result of this work? In what ways?
What, if any, inequities become visible as this work begins to be operationalized and tested?
What kinds of accountability mechanisms should be in place to ensure fidelity to the spirit of these Guidelines? Where should that mechanism be housed? (For example, a Faculty Senate committee? Something else?)
Does this create a system of responsible, holistic evaluation that is transparent, feasible, equitable, and sustainable?
[5] In 2018, the Provost’s Office led a broad faculty engagement process to determine which changes needed to be made. The changes at that time were relatively minimal based on direction from SLU faculty members.