a) Establish and fund a program to support improvements in the assessment of learning outcomes and program review.
b) Adopt a new course evaluation instrument.
c) Implement a multidimensional approach to teaching evaluation.
d) Use the data from the improved teaching evaluation approach as the basis for issues addressed in faculty development programs.
Student evaluations are as reliable as peer evaluations, provided that response rates are good (Paulsen 2002).
Completion of student course evaluations is imperative in evaluating curricular trends and teaching effectiveness, particularly if no other assessment methods are performed (Hatfield & Coyle 2013).
Faculty members receiving the best evaluations were not always the most effective teachers according to students (Surratt & Desselle 2007).
1970 - The Dr. Fox Effect (Wikipedia)
Direct Link: http://www.youtube.com/watch?v=RcxW6nrWwtc
Groups of students & professionals were given lectures varying in content coverage.
Our Challenge: How Do We Get Students to Complete Course Evaluations?
Results of course evaluations completed earlier in a course are highly correlated with results of course evaluations completed finals week or after (McNulty et al. 2010).
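As an illustration of what "highly correlated" means here, the following minimal Python sketch (with invented ratings, not the data from McNulty et al.) correlates per-course mean ratings collected mid-term with ratings collected during finals week:

```python
# Hypothetical illustration: correlate mid-term and finals-week mean course
# ratings for the same set of courses. All numbers are invented for the sketch.
from scipy.stats import pearsonr

early_means = [4.2, 3.8, 4.6, 3.1, 4.0, 3.5, 4.4, 2.9]  # mean rating per course, mid-term
late_means  = [4.3, 3.7, 4.5, 3.3, 4.1, 3.4, 4.4, 3.0]  # mean rating per course, finals week

r, p_value = pearsonr(early_means, late_means)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
# A large positive r indicates that early feedback closely tracks
# end-of-term feedback for these (made-up) courses.
```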
The Amount
Another barrier is the total number of evaluations students are asked to complete and the number of courses they have to evaluate (Cottreau & Hatfield 2001).
The Method
Compared with paper surveys, online evaluations have been associated with increased response rates (Barnett & Matthews 2009; Anderson et al. 2005; Thorpe 2002; Hatfield & Coyle 2013).
The Student's Motivation: Why Bother?
Higher response rates and higher evaluation scores are routinely seen in courses where students are highly motivated and have high grade expectations (Kidd & Latif 2003; Surratt & Desselle 2007; Hatfield & Coyle 2013).
Students were more likely to respond if they knew how their evaluations would be used and what decisions their responses would influence (Kidd & Latif 2003; Anderson et al. 2005; Cottreau & Hatfield 2001; Hatfield & Coyle 2013).
The Demographics of Students
A study at the University of Houston found the following variable to be significant in explaining course evaluation completion (Hatfield & Coyle 2013):
**Disclaimer: These rough calculations are provided for 'entertainment purposes only' and do not substitute for official university statistics.**
Faculty Personnel Policy - Article 9, Professional Responsibilities, Section 3N & 3O:
Arranging for student evaluations of all classroom teaching for each term is a regular part of the responsibilities of full-time faculty members. The form utilized may be the Campus accepted form or an alternative form approved by the Department or Program, the appropriate Dean, and the Vice Chancellor for Academic Affairs, completed by students anonymously and unavailable to the faculty member until grades for a given semester have been transmitted to the Registrar. Where an alternative is used, the Provost’s Office shall summarize the results and forward the summary to the Personnel File where it shall be retained permanently.
In addition to using the required standard evaluations, some faculty, departments, and programs develop and have students administer supplemental evaluations of courses and teaching. Since supplemental evaluations are formative in nature, faculty may choose how they are administered and documented. If faculty elect to develop and use a supplemental evaluation form they may use the standard course evaluation distribution and/or collection process. The supplemental evaluation packets may be deposited along with the standard course evaluation in the course evaluation drop boxes. After final grades for the semester are submitted to the Registrar, the Provost's Office will return the evaluations to the faculty member.
5 Weeks prior to last day of class -- The Faculty Files Office emails instructors teaching onground and blended courses requesting that they notify the office if they prefer to have their course(s) for that term evaluated through the online process.
3 Weeks prior to last day of class -- The Faculty Files Office emails instructors who intend to have their course evaluations completed in the classroom to pick up their packets.
3 Weeks prior to the last day of class -- Faculty of online classes receive an e-mail from the Faculty Files Office notifying them that the online evaluation system is available for students.
Administration of UIS Course Evaluations
On-Campus
Instructions for administering course evaluations in the classroom are included with each evaluation packet.
Identify a student to be responsible for administering, collecting and depositing the completed evaluation packet in one of the course evaluation drop boxes, which are located throughout classroom buildings and identified on the instruction sheet.
Faculty are required to leave the classroom while students complete their evaluations.
Online
Faculty teaching online courses are required to use the online course evaluation system (https://uisapp-s.uis.edu/evaluation/). The evaluations became available today (April 15).
All evaluations must be completed no later than Saturday, May 4th.
The Faculty Files Office collects the completed evaluation packets from the drop boxes and enters the data into the course evaluation database.
The Faculty Files Office generates a summary report for each faculty member’s permanent personnel file for each course taught during a given semester.
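As a rough illustration of what such a summary might contain, here is a minimal Python sketch (invented data and column names; this is not the Faculty Files Office's actual system) that aggregates individual responses into a per-course response count and item means:

```python
# Hypothetical sketch: aggregate raw evaluation responses into a per-course
# summary (response count plus mean rating per item). Data are invented; the
# two items mimic the 1-5 scale questions on the standard form shown below.
import pandas as pd

responses = pd.DataFrame({
    "course":     ["BIO101", "BIO101", "BIO101", "HIS202", "HIS202"],
    "competence": [5, 4, 5, 3, 4],   # e.g., competence in course content (1-5)
    "overall":    [5, 4, 4, 3, 5],   # e.g., overall quality as a teacher (1-5)
})

summary = (
    responses.groupby("course")
             .agg(n_responses=("overall", "size"),
                  mean_competence=("competence", "mean"),
                  mean_overall=("overall", "mean"))
             .round(2)
)
print(summary)
```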
An email notification is sent to faculty members letting them know that their evaluation summaries are available online; the course evaluation forms, which include handwritten student comments, are then returned to the faculty member.
Team Taught Courses - Each instructor is evaluated individually, with the process being identical to the standard course evaluation.
Alternative Evaluations - The process is expected to be identical to the standard course evaluation.
Supplemental Evaluations - Faculty may choose how supplemental course evaluations are administered and documented.
Library Faculty - See Faculty Personnel Policy, Appendix 11 for guidelines & process.
1. Your current class standing: (1) undergraduate (2) graduate
2. Your sex: (1) female (2) male
3. Grade you expect to receive in this class (1)A (2) B (3) C (4) D (5) U (6) I (7) CR (8) NC
4. I took this course as: (1) an elective (2) a program requirement
5. As a result of taking this course, my interest in this subject has: (1) decreased (2) remained the same (3) increased
6. This course has increased my skills in critical thinking: (1) yes (2) no
7. The instructor’s presentation is well planned and organized: (1) yes (2) no
8. Do you think this teacher is competent in the content or material offered in this course: (1) Incompetent (2) (3) Satisfactory (4) (5) Exceptionally Competent
9. This course has motivated me to work at my highest level: (1) yes (2) no
10. Overall, how do you rate the quality of this person as a teacher: (1) poor (2) fair (3) good (4) very good (5) excellent
How are Evaluations Used at UIS?
Strategies for Increasing Course Evaluation Response Rate
Ask for feedback earlier in the semester.
Post course evaluation announcements as many times and in as many places as you can:
Sample Announcement
Today, course evaluations are open online. These are very important to improving the quality of classes at UIS. They also are an important instrument used in the promotion and tenure process for faculty members. Please take a few moments to fill out the evaluations for this class and any others you may be taking that have online evaluations: https://uisapp-s.uis.edu/evaluation/ These evaluations are available only through Saturday, May 4. (Thanks!)
Tips for Getting Additional Feedback from Students
Supplemental evaluations - http://blogs.uis.edu/colrs/2013/04/11/supplemental-evaluation/
Web toolbox forms for specific assignments and large projects: http://illinois.edu/webtools/
Answers to Faculty Concerns About Online Versus In-class Administration of Student Ratings of Instruction (SRI)
From Chapter 7, "Online Ratings," in Student Ratings of Instruction: A Practical Approach to Designing, Operating, and Reporting, by Nira Hativa, foreword by Michael Theall and Jennifer Franklin.
Many faculty members express reservations about online SRIs. To increase their motivation and cooperation, it is essential to understand the underlying reasons for their resistance and to provide them with good answers that counter their reservations and defuse their concerns. The following are research-based answers to four major faculty concerns about online SRIs.
Concern 1: The online method leads to a lower response rate [which may have some negative consequences for faculty].
Participation in online ratings is voluntary and requires students to be motivated to invest time and effort in completing the forms. Faculty are concerned that these conditions will produce a lower response rate, which may reduce the reliability and validity of the ratings and may have negative consequences for them.
The majority of studies on this issue found that online ratings do indeed produce a lower response rate than in-class ratings (Avery, Bryant, Mathios, Kang, & Bell, 2006; Benton, Webster, Gross, & Pallett, 2010; IDEA, 2011; Nulty, 2008). One explanation is that in-class surveys rely on a captive audience; moreover, students in class are encouraged to participate by the mere presence of the instructor, the instructor's expressed pressure to respond, and peer pressure. In contrast, in online ratings, students may lack the motivation or compulsion to complete the forms, or they may experience inconvenience and technical problems (Sorenson & Johnson, 2003).
Concern 2: Dissatisfied/less successful students participate in the online method at a higher rate than other students.
Faculty are concerned that students who are unsuccessful, dissatisfied, or disengaged may be particularly motivated to participate in online ratings in order to rate their teachers low, blaming them for their own failure, disengagement, or dissatisfaction. Consequently, students with low opinions about the instructor will participate in online ratings at a substantially higher rate than more satisfied students.
If this concern were correct, the majority of respondents in online surveys would rate the instructor and the course low, and the rating distribution would be skewed towards the lower end of the rating scale. However, there is robust research evidence to the contrary for both methods, on paper and online: the distribution of student ratings on the Overall Teaching item is strongly skewed towards the higher end of the scale.
Online score distributions have the same shape as the paper distributions—a long tail at the low end of the scale and a peak at the high end. In other words, unhappy students do not appear to be more likely to complete the online ratings than they were to complete paper ratings (Linse, 2012).
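This shape argument can be checked directly on one's own data: if dissatisfied students dominated the online responses, the ratings would pile up at the low end of the scale. A minimal Python sketch with invented ratings:

```python
# Hypothetical check of the distribution-shape argument. scipy.stats.skew is
# negative when the long tail is at the LOW end of the scale (peak at the high
# end), which is the pattern reported for both paper and online ratings.
from scipy.stats import skew

# Invented overall-rating responses (1-5 scale) for one course.
online_ratings = [5, 5, 4, 5, 4, 3, 5, 4, 2, 5, 4, 5, 3, 5, 4]

print(f"skewness = {skew(online_ratings):.2f}")
# A negative value means most respondents rated high, with a minority of
# low ratings forming the tail, the opposite of what this concern predicts.
```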
The strong evidence that the majority of instructors are rated above the mean of the rating scale indicates that the majority of participants in online ratings are the more satisfied students, refuting faculty concerns about a negative response bias. Indeed, substantial research evidence shows that the better students, those with higher cumulative GPAs or higher SAT scores, are more likely to complete online SRI forms than the less successful students (Adams & Umbach, 2012; Avery et al., 2006; Layne, DeCristoforo, & McGinty, 1999; Porter & Umbach, 2006; Sorenson & Reiner, 2003).
The author examined this issue at her university for all undergraduate courses in two large schools, Engineering and Humanities (Hativa, Many, & Dayagi, 2010), with 110 and 230 participating courses, respectively. At the beginning of the semester, all students in each school were sorted into four GPA levels: the lowest 20% of GPAs in a school formed the Poor group, the highest 20% formed the Excellent group, and the two intermediate levels formed the Fair and Good groups, with 30% of the students in each. The response rates for the Poor, Fair, Good, and Excellent groups were 35%, 43%, 43%, and 50%, respectively, in the School of Humanities, and 48%, 60%, 66%, and 72% in the School of Engineering.
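A minimal Python sketch of the grouping procedure described above, using invented student records rather than the author's data:

```python
# Hypothetical sketch of the GPA-band analysis: split students into four GPA
# groups (bottom 20%, next 30%, next 30%, top 20%) and compute the share of
# each group that submitted an online evaluation. All data are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
gpa = rng.uniform(2.0, 4.0, size=500)
# Placeholder response behaviour: higher GPA -> higher chance of responding.
responded = rng.random(500) < (0.2 + 0.15 * (gpa - 2.0))
students = pd.DataFrame({"gpa": gpa, "responded": responded})

# Percentile cut points: 0-20% Poor, 20-50% Fair, 50-80% Good, 80-100% Excellent.
bins = students["gpa"].quantile([0, 0.2, 0.5, 0.8, 1.0]).values
students["group"] = pd.cut(students["gpa"], bins=bins,
                           labels=["Poor", "Fair", "Good", "Excellent"],
                           include_lowest=True)

response_rate = students.groupby("group", observed=True)["responded"].mean().round(2)
print(response_rate)
```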
In sum, this faculty concern is refuted and even reversed: the higher the GPA, the higher the response rate in the online method, so the least successful students appear to participate in online ratings at a lower rate than better students.
Concern 3: The lower response rate (as in Concern 1) and the higher participation rate of dissatisfied students in online administration (as in Concern 2) will result in lower instructor ratings, as compared with in-class administration.
Faculty members are concerned that if the response rate is low (e.g., less than 40% as happens frequently in online ratings), the majority of respondents may be students with a low opinion of the course and the teacher, lowering the “true” mean rating of the instructor.
Research findings on differences in average rating scores between the two methods of survey delivery are inconsistent. Several studies found no significant differences (Avery et al., 2006; Benton et al., 2010; IDEA, 2011; Linse, 2010; Venette, Sellnow, & McIntyre, 2010). Other studies found that ratings were consistently lower online than on paper, but that the size of the difference was either small and not statistically significant (Kulik, 2005) or large and statistically significant (Chang, 2004).
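For readers who want to run this kind of paper-versus-online comparison on their own courses, here is a minimal Python sketch using Welch's t-test on invented per-course means (not data from any of the cited studies):

```python
# Hypothetical comparison of mean overall ratings collected on paper vs online.
# Data are invented; Welch's t-test does not assume equal variances.
from scipy.stats import ttest_ind

paper_means  = [4.3, 4.1, 3.9, 4.5, 4.2, 3.8, 4.4, 4.0]   # per-course mean, paper
online_means = [4.1, 4.0, 3.7, 4.4, 4.0, 3.6, 4.3, 3.9]   # per-course mean, online

t_stat, p_value = ttest_ind(paper_means, online_means, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small, non-significant difference would match studies such as Kulik (2005);
# a large, significant one would match Chang (2004).
```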
The conflicting findings among the different studies can be explained by differences in the size of the populations examined (from dozens to several thousand courses), the different instruments used (some of which may be of lower quality), and the different research methods. Nonetheless, the main source of variance between the findings is probably whether participation in SRI is mandatory or selective. If not all courses participate in the rating procedure, but only those selected by the department or self-selected by the instructor, then the selected courses and their mean ratings may not be representative of the full course population and should not be used as a valid basis for comparison.
The author examined this issue in two studies that compared mean instructor ratings under paper and online SRI administration, based on her university's data with mandatory course participation. The results of both studies, presented graphically in the book, reveal a strong decrease in annual mean and median ratings from paper to online administration. The lower online ratings cannot be explained by a negative response bias (a higher participation rate of dissatisfied students) because, as shown above, many more good students than poor students participate in online ratings. A reasonable explanation is that online ratings are more sincere, honest, and free of teacher influence and social desirability bias than in-class ratings.
The main implication is that comparisons of course/teacher ratings are valid only within the same method of measurement, either on paper or online; ratings obtained by the two methods should never be compared with each other. The best way to avoid improper comparisons is to use a single rating method throughout all courses in an institution, or at least within a particular school or department.
Concern 4: The lower response rate and the higher participation rate of dissatisfied students in online administration will result in fewer, and mostly negative, written comments.
Faculty members are concerned that because the majority of expected respondents are dissatisfied students, the majority of written comments will be negative (Sorenson & Reiner, 2003). An additional concern is that, because of the lower response rate in online surveys, the total number of written comments will be significantly reduced compared to in-class ratings. The fewer the comments students write, the lower the quality of the feedback teachers receive as a resource for improvement.
There is a consensus among researchers that although mean online response rates are lower than with paper administration, more respondents write comments online than on paper. Johnson (2003) found that while 63% of online rating forms included written student comments, fewer than 10% of in-class forms did. Altogether, the overall number of online comments appears to be larger than in paper surveys.
In support:
On average, classes evaluated online had more than five times as much written commentary as the classes evaluated on paper, despite the slightly lower overall response rates for the classes evaluated online (Hardy, 2003, p. 35).
In addition, comments written online were found to be longer, to present more information, and to contain fewer socially desirable responses than those written on paper (Alhija & Fresko, 2009). Altogether, the larger number of written comments and their greater length and detail in the online method provide instructors with more useful information, so the quality of online written responses is better than that of in-class survey comments.
The following are four possible explanations for the larger number of online comments and for their better quality:
· No time constraints: During an online response session, students are not constrained by time and can write as many comments as they wish, at any length.
· Preference for typing over handwriting: Students seem to prefer typing (in online ratings) to handwriting comments.
· Increased confidentiality: Some students are concerned that the instructor will identify their handwriting if the comments are written on paper.
· Prevention of instructor influence: Students feel more secure and free to write honest, candid responses online.
Regarding the favorability of the comments, students were found to submit positive, negative, and mixed written comments in both methods of delivery, with no predominance of negative comments in online ratings (Hardy, 2003). Indeed, for low-rated teachers (those perceived by students as poor teachers), written comments appear to be predominantly negative. In contrast, high-rated teachers receive only a few negative comments and predominantly positive ones.
In sum, faculty beliefs about written comments are refuted: students write more comments online, the comments are of better quality, and they are not mostly negative but rather reflect the general quality of the instructor as perceived by students.
References
Adams, M. J. D., & Umbach, P. D. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53, 576-591.
Alhija, F. N. A., & Fresko, B. (2009). Student evaluation of instruction: What can be learned from students' written comments? Studies in Educational Evaluation, 35(1), 37-44.
Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., & Bell, D. (2006). Electronic course evaluations: Does an online delivery system influence student evaluations? The Journal of Economic Education, 37(1), 21-37.
Benton, S. L., Webster, R., Gross, A. B., & Pallett, W. H. (2010). An analysis of IDEA student ratings of instruction using paper versus online survey methods, 2002-2008 data (IDEA Technical Report No. 16). The IDEA Center.
Chang, T. S. (2004). The results of student ratings: Paper vs. online. Journal of Taiwan Normal University, 49(1), 171-186.
Hardy, N. (2003). Online ratings: Fact and fiction. In D. L. Sorenson & T. D. Johnson (Eds.), Online student ratings of instruction. New Directions for Teaching and Learning (Vol. 96, pp. 31-38). San Francisco: Jossey-Bass.
Hativa, N., Many, A., & Dayagi, R. (2010). The whys and wherefores of teacher evaluation by their students [in Hebrew]. Al Hagova, 9, 30-37.
IDEA. (2011). Paper versus online survey delivery (IDEA Research Notes No. 4). The IDEA Center.
Johnson, T. D. (2003). Online student ratings: Will students respond? In D. L. Sorenson & T. D. Johnson (Eds.), Online student ratings of instruction. New Directions for Teaching and Learning (Vol. 96, pp. 49-59). San Francisco: Jossey-Bass.
Kulik, J. A. (2005). Online collection of student evaluations of teaching. Retrieved April 2012, from http://www.umich.edu/~eande/tq/OnLineTQExp.pdf
Layne, B. H., DeCristoforo, J. R., & McGinty, D. (1999). Electronic versus traditional student ratings of instruction. Research in Higher Education, 40(2), 221-232.
Linse, A. R. (2010, February 22). [Building in-house online course eval system]. Professional and Organizational Development (POD) Network in Higher Education, listserv commentary.
Linse, A. R. (2012, April 27). [Early release of the final course grade for students who have completed the SRI form for that course]. Professional and Organizational Development (POD) Network in Higher Education, listserv commentary.
Nulty, D. D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33, 301-314.
Porter, S. R., & Umbach, P. D. (2006). Student survey response rates across institutions: Why do they vary? Research in Higher Education, 47(2), 229-247.
Sorenson, D. L., & Johnson, T. D. (Eds.). (2003). Online student ratings of instruction. New Directions for Teaching and Learning (Vol. 96). San Francisco: Jossey-Bass.
Sorenson, D. L., & Reiner, C. (2003). Charting the uncharted seas of online student ratings of instruction. In D. L. Sorenson & T. D. Johnson (Eds.), Online student ratings of instruction. New Directions for Teaching and Learning (Vol. 96, pp. 1-24). San Francisco: Jossey-Bass.
Venette, S., Sellnow, D., & McIntyre, K. (2010). Charting new territory: Assessing the online frontier of student ratings of instruction. Assessment & Evaluation in Higher Education, 35(1), 97-111.