# Evaluating students’ evaluations of professors

This paper contains some bizarre observations:

Michela Braga, Marco Paccagnella, Michele Pellizzari, Evaluating students’ evaluations of professors. Economics of Education Review, Volume 41 (2014), Pages 71-88.
Abstract: This paper contrasts measures of teacher effectiveness with the students’ evaluations for the same teachers using administrative data from Bocconi University. The effectiveness measures are estimated by comparing the performance in follow-on coursework of students who are randomly assigned to teachers. We find that teacher quality matters substantially and that our measure of effectiveness is negatively correlated with the students’ evaluations of professors. A simple theory rationalizes this result under the assumption that students evaluate professors based on their realized utility, an assumption that is supported by additional evidence that the evaluations respond to meteorological conditions.

# Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related

An interesting paper:

Bob Uttl, Carmela A. White, Daniela Wong Gonzalez, Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, Volume 54, September 2017, Pages 22-42.

Abstract: Student evaluation of teaching (SET) ratings are used to evaluate faculty’s teaching effectiveness based on a widespread belief that students learn more from highly rated professors. The key evidence cited in support of this belief are meta-analyses of multisection studies showing small-to-moderate correlations between SET ratings and student achievement (e.g., Cohen, 1980, 1981; Feldman, 1989). We re-analyzed previously published meta-analyses of the multisection studies and found that their findings were an artifact of small sample sized studies and publication bias. Whereas the small sample sized studies showed large and moderate correlation, the large sample sized studies showed no or only minimal correlation between SET ratings and learning. Our up-to-date meta-analysis of all multisection studies revealed no significant correlations between the SET ratings and learning. These findings suggest that institutions focused on student learning and career success may want to abandon SET ratings as a measure of faculty’s teaching effectiveness.

The epigraph is great:

“For every complex problem there is an answer that is clear, simple, and wrong.” (H. L. Mencken)

BibTeX:

```
@article{UTTL201722,
  title    = "Meta-analysis of faculty's teaching effectiveness: Student evaluation of teaching ratings and student learning are not related",
  author   = "Bob Uttl and Carmela A. White and Daniela Wong Gonzalez",
  journal  = "Studies in Educational Evaluation",
  volume   = "54",
  pages    = "22--42",
  year     = "2017",
  note     = "Evaluation of teaching: Challenges and promises",
  issn     = "0191-491X",
  doi      = "10.1016/j.stueduc.2016.08.007",
  url      = "http://www.sciencedirect.com/science/article/pii/S0191491X16300323",
  keywords = "Meta-analysis of student evaluation of teaching, Multisection studies, Validity, Teaching effectiveness, Evaluation of faculty, SET and learning correlations"
}
```