Teaching Effectiveness vs. Teaching Evaluations

By Megan Sumeracki

For most of us who teach in higher education, the end of the semester means administering teaching evaluations, which can be a bit stressful. The beginning of each semester, in turn, means reading those evaluations. In theory, these evaluations allow us to see our courses through our students’ eyes, providing insight into the areas in which we excel and the areas where we could improve. Personally, I do find that some of my students’ hand-written qualitative comments are constructive and useful. However, there are many reasons to be wary of teaching evaluations overall. Some research has shown biases in students’ evaluations of their teachers, and you can read more about this in the blog written by Cindy here.

In addition to this, an analysis published by Nate Kornell and Hannah Hausman (1) demonstrated that the best teachers—those who fostered meaningful learning in their students—do not always get the best ratings. In their analysis, they looked at numerous studies examining the relationship between student evaluations of teaching and educational experience, performance in the course, and performance in subsequent related courses.

When Kornell and Hausman reviewed the literature, they concluded that there was a small relationship between teaching evaluations and performance in the course. The studies examined used grades on a common final exam (i.e., the final exam was the same across multiple sections of the course with different instructors) as the measure of performance in the course. In other words, students who got better grades on the final were slightly more likely to rate their teachers as effective. However, this relationship was very small, and student grades on the common final exam did not account for much of the variance in the teaching evaluations students provided.

This paper was published in 2016. The next year, Uttl and colleagues (2) published a large meta-analysis—a systematic mathematical analysis of the effect sizes across studies—on the topic. They concluded that there was no relationship between instructor evaluations and student learning (measured by overall course performance, final exams, and other achievement tests). Taken together, it seems that learning does not much influence students’ teaching evaluations of their professors, and may not influence evaluations at all!

What was particularly interesting about the Kornell and Hausman analysis was that they also looked at how teaching evaluations were related to performance in subsequent related classes. In other words, they wondered whether students who rated their teachers as more effective were better prepared for related classes they took later, and thus performed better in those courses. This is extremely important if our long-term goal is to help our students learn and apply what they have learned later on. The studies that Kornell and Hausman analyzed specifically examined how much instructors contributed to students’ knowledge in the later courses (called value-added measures). They found no relationship between teaching evaluations and student performance in subsequent related courses. In fact, looking at the best-controlled studies on this topic (3,4), instructors who contributed more to students’ knowledge in later courses actually received lower teaching evaluations from their students! Thus, if the goal is for an instructor to help their students learn and be better equipped to apply that knowledge in future courses, then those with the lower teaching evaluations did the best job.