This week we were assigned to read a research paper related to our projects, and I chose to read "Will Teachers Receive Higher Student Evaluations by Giving Higher Grades and Less Course Work?" by John A. Centra.
This paper summarizes the findings of a large-scale study of over 55,000 courses, spanning all levels and a variety of subjects, on the correlation between the grades students received and their evaluations of course quality. This paper differs from the many others on the same niche topic in a few ways:
Due to the size and organization of this analysis, the results are much more consistent than in other papers on the topic of student grades and evaluations. Although the general belief at universities and colleges is that professors who give higher grades receive higher ratings (quantified in other studies by a correlation of roughly 0.20 between grades and evaluations), this belief was not supported by the analysis in any of the subject areas in Centra's study. In fact, expected grade correlated with overall evaluation by only 0.11. Even this level of correlation does not indicate that professors pander to students in order to receive more favorable ratings--Centra notes that students who receive a higher grade often simply enjoy the class more and rate the professor more highly. Similarly, a professor who teaches in an accessible way will often impart more information to students, leading to both higher grades and higher teaching evaluations. Overall, difficulty and pace had a greater influence on evaluations than expected grades did in every subject.
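As a quick side exercise for myself, here is a minimal sketch of how a grade-evaluation correlation like that 0.11 figure would be computed. The data below is entirely synthetic and the variable names are my own invention--this is not Centra's dataset or procedure, just an illustration of a Pearson correlation:

```python
import numpy as np
from scipy.stats import pearsonr

# Entirely synthetic course-level data, for illustration only.
rng = np.random.default_rng(0)
n_courses = 500

# Mean expected grade per course (4.0 scale) and an overall
# evaluation score (1-5 scale) that depends only weakly on it.
expected_grade = rng.normal(3.0, 0.4, n_courses)
evaluation = (3.8 + 0.15 * (expected_grade - 3.0)
              + rng.normal(0, 0.5, n_courses))

r, p = pearsonr(expected_grade, evaluation)
print(f"r = {r:.2f}, p = {p:.3f}")

# A correlation around 0.1 explains only about 1% of the variance
# (r squared), which is why such a relationship is practically weak.
```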
The most interesting and most highlighted result of Centra's work is that course evaluations and course difficulty have a quadratic relationship for most subjects. Courses that were evaluated as too difficult were rated poorly, but so were courses rated as too easy or too elementary. This suggests that students respond to courses relative to their own level of preparation for the material before entering the course. Professors who misjudge the level of student preparation are the ones who receive the lowest evaluations, not simply those who make the course hard (or easy).
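To make the quadratic relationship concrete for myself, here's a small sketch of fitting an inverted-U curve of evaluation against perceived difficulty and finding where it peaks. Again, the data is invented and the setup is my own illustration, not Centra's actual model:

```python
import numpy as np

# Synthetic data: perceived difficulty from 1 (too elementary) to 5
# (too difficult), with evaluations peaking at moderate difficulty.
rng = np.random.default_rng(1)
difficulty = rng.uniform(1, 5, 400)
evaluation = (4.5 - 0.6 * (difficulty - 3.0) ** 2
              + rng.normal(0, 0.3, 400))

# Fit evaluation as a quadratic function of difficulty:
# evaluation ~ a * difficulty^2 + b * difficulty + c
a, b, c = np.polyfit(difficulty, evaluation, deg=2)

# For an inverted U (a < 0), the vertex -b/(2a) is the difficulty
# at which predicted evaluations are highest.
print(f"peak difficulty: {-b / (2 * a):.2f}")  # ~3.0 for this data
```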
This research relates strongly to Anlan's and my research topic of examining course evaluations and student performance/retention. One of our concerns was that we would be predicting course performance from skewed data--that is, predicting grades based on evaluations that were themselves influenced by grades--but this study implies that evaluations don't function on a quid pro quo basis. However, there are a couple of key differences between Centra's work and our own. Firstly, George Mason University does not use an evaluation format similar to the SIR II. Apart from the "The assignments (projects, papers, presentations, etc.) helped me learn the material" question, the GMU evaluation survey includes no questions that prompt students to consider their own learning in the course, which is a large departure from the style of Centra's instrument. In addition, the answers are not given on a relative basis, meaning that the scoring is absolute within the course rather than in relation to other courses. Further, GMU offers no questions that estimate student workload. Finally, Centra notes that for engineering and technology courses, the workload/difficulty of the course was more strongly negatively correlated with the overall course evaluation than for other subjects (where the correlation was occasionally a positive one).
Overall, this paper gave me much more insight into the details of how students perceive courses and how those perceptions might affect course evaluations.
Along with reading the research paper and looking up unfamiliar terms and procedures, this week involved a lot of independent learning about various statistical concepts and data mining (DM) techniques. Here are some of the things I learned about: