
May 16, 2002

Turning the tables: The importance of student evaluations of faculty varies

For most academic units at Pitt, the term is over, exams are completed, students' grades are in. Now, the tables have turned and most faculty are awaiting "grades" from their students.

As a matter of institutional policy, faculty at Pitt, regardless of rank, are required to be evaluated by their students at least once a year, and commonly more often.

But how heavily student evaluations of instructors are weighed here in decisions affecting hiring, salary, promotion and tenure depends on many factors, including the discipline, the academic rank of the instructor, the type of course and the discretion of the dean or department chair.

And that's as it should be, said Jack Daniel, vice provost for Academic Affairs.

"The system in place does work," Daniel said. "Units worked with the Provost's office to establish guidelines specific to the unit and subject to the provost's approval. The evaluation forms used, by necessity, have to be modified by the subject matter, and whether it is clinical instruction, laboratory instruction, or classroom instruction, and the kind of classroom instruction, such as large lectures."

The Provost's office periodically urges schools to re-evaluate the surveys they are using to measure students' opinions of teachers, as pedagogical tools and methods evolve. Daniel said the next such nudge will be going out to academic units within the next year.

Daniel headed the Provost-appointed committee that set the ground rules for student evaluations in the early 1990s.

A University-wide plan in 1985 urged the adoption of policies that incorporated student evaluations and peer reviews in instructional assessment, and Pitt's Board of Trustees approved proposals in 1989 that supported adopting a formal policy.

In 1991, then Chancellor J. Dennis O'Connor assigned the Provost's office to work with each unit in developing mandatory student and peer evaluation systems across the University.

In 1993, the University Senate's educational policies committee submitted a recommendation, subsequently approved by Faculty Assembly and Senate Council, that a formal policy be adopted.

The charge to oversee creating and processing student evaluations fell to the Office of Measurement and Evaluation of Teaching (OMET), directed by Carol E. Baker. Baker said that Pitt's institutional timeline for establishing a system of student evaluations mirrored a nationwide trend.

"In the late 1970s when this office opened, student evaluations were strictly voluntary, at the discretion of the instructor," Baker said, which was typical in American higher education.

Enterprising Pitt undergrads, through the Student Government Board, published some evaluations each term up until the mid-1980s, but evaluations were limited to those that the instructors gave permission to publish. "You can see the limitations to that system, where only instructors who got basically high ratings would consent to make them public," she said.

From the mid-1980s to early 1990s the use of student evaluations in academic decision-making became more acceptable across the country, Baker said, even though their validity is debated by educational researchers to this day. She estimated that at least 90 percent of institutions of higher learning now have some system in place, although many schools use only one form across the board. At Pitt, she noted, the forms have only one common question University-wide (though with slight variations): "Express your judgment of the instructor's overall teaching effectiveness." Other questions are particular to the school or department.

The current student evaluation/peer review policy at Pitt was implemented in the spring term of 1995. Introducing that policy at a Senate Council meeting in fall 1994, Provost James Maher said, "The only centrally imposed requirement is that the [teaching] evaluation process involve both student and faculty input so that when a department chair or dean is making a raise or salary decision, the peer and the students both have been consulted and it's not an arbitrary thing." The goal of the system was not conformity of plans across units, Maher added.

A sampling of Pitt units bears this out. In the business and law schools, for example, students evaluate every instructor, every course, every term.

In the School of Social Work, by contrast, tenured and tenure-stream professors are required to submit only one course evaluation to the dean's office per term, while social work adjuncts are evaluated for every course.

The School of Dental Medicine and the School of Nursing have separate evaluation forms for clinical and didactic courses, and the School of Medicine has several evaluation forms that differentiate, among other things, by the class level of the students.

The School of Engineering attaches a set of questions to the standard OMET form that helps the school compile data for program certification or accreditation. Students are asked to judge to what extent courses improved their ability to apply math, chemistry or physics to engineering problems; whether courses covered current issues; whether courses addressed potential public risks; and whether courses added to their knowledge of professional ethics, among other questions.

Other programs include tailored questions. The University Honors College asks students to compare courses with non-Honors College courses, and the English writing program asks students about the value of the instructor's comments on their papers and whether the course helped them improve their writing.

"The system is flexible by individuals, departments and schools," Baker said. "We even have a form for Semester at Sea."

OMET has prepared some 300 optional questions that a faculty member can designate for the student evaluation; more specialized questions can be accommodated, Baker said.

As for the weight attached to the student evaluations, units also take different tacks.

Ray Engel, interim associate dean for academic affairs in the School of Social Work, said that his school has different measures for evaluating faculty depending on their academic rank. "We are a school that truly believes in the three-part mission of teaching, service and research. In our tenure stream, expectations for faculty are more than to be good teachers, although that's certainly an important part of it. We like to see demonstration of good progress in writing and publishing, field research and scholarship."

Engel said that the school recently modified the student evaluation forms to measure how instructors and courses reflect social values, ethics and field experience.

"We have a lot of part-time faculty and we like to have a good solid core of adjuncts who bring field experience to the classroom," Engel said. Since teaching is adjuncts' primary responsibility, student evaluations are given more weight in decisions to retain adjuncts, he said.

But Engel said new adjuncts are given a grace period with student evaluations: "If by the third time an adjunct is evaluated and gets bad reviews, they're not likely to be hired back," he said.

Social Work's Wynne Korr, chair of the school's promotion and tenure committee, said that student evaluations play "a contextual role" in decisions her committee makes. "They're considered important by our committee," Korr said. "But we look at the teaching portfolio, which includes the peer review, the variety of the curriculum, whether the courses are at the bachelor's or master's or practicum level," in addition to non-teaching performance related to research and public service.

"Also, we have the third-year review for our tenure-stream faculty," Korr said. "It's hard to imagine an instructor up for tenure who has had consistently bad evaluations for six years" when final tenure decisions ordinarily are made.

At the School of Law, student evaluations likewise are considered in decisions about the faculty. Dean David Herring said, "I would say the student evaluations are an important factor. I review them carefully and I pass summaries to the Provost's office with my recommendations for tenure or promotion. They're especially important in the evaluation of adjuncts."

An adjunct with very poor evaluations would not be asked back, he said, but evaluations have not been the deciding factor in any instance during his tenure as dean.

For tenured and tenure-stream faculty, demonstrating scholarship carries much more weight in salary and promotion decisions, he said. "We consider our school a home of legal scholarship," he said, although he added that, generally speaking, faculty get high marks from students.

Attilio Favorini, chair of theatre arts, said that to put student evaluations in context, his department looks at "historical patterns for our courses. We've found that there are certain courses in which the teachers consistently get high ratings from students. For example, our Introduction to Performance course is very popular. Whether it's because it's the first time a student has taken that kind of course, or whether it's different in that it features physical exercise and acts as a relief to typical classes — it could be a number of things."

The point, Favorini said, is that it is invalid to assume instructors are better qualified simply because good evaluations tend to come from popular courses. Students might be rewarding personal enjoyment, for example.

One way of homing in on that issue, Favorini said, is to look at the combination of questions that are part of many Pitt forms, including the one for Arts and Sciences: "Would you recommend this course to other students?" and "Would you recommend this instructor to other students?"

"We also pay close attention to students' answers regarding whether the course was more or less demanding than other courses they have taken. If they found the course easy, are they rewarding the instructor for that?" Favorini asked. "I also like to remind faculty the evaluations are students' opinions of teachers, opinions that could be influenced by a number of factors."

He added that the department's peer review process acts as a counterbalance to student evaluations.

Peer reviews take precedence over student evaluations in the economics department, according to department chair Jean-Francois Richard.

"Especially for our junior faculty and our new faculty, the peer evaluation is important in tenure decisions and in helping to fix any problems," Richard said. "A team of two, once or twice a term, evaluates [the junior faculty member] and provides a summary of the feedback that is shared with the faculty member. It is a very interactive process. The faculty as a whole meet for year-end peer evaluations."

Student evaluations on the other hand are used mainly as a tool for improvement, he said. "We do look at [the evaluation question on] the overall effectiveness of the teaching. We like to see improvement there from year to year. But we also look at the evaluations quite carefully for the value of the input," which varies according to the kind of course.

"We take into account, for example, if this is a large undergraduate service class that may be required, because they typically get lower scores. We don't want to penalize anyone for teaching large classes." Smaller classes, in contrast, normally offer more faculty-student one-on-one communication and result in higher evaluations.

"I also see that immediate feedback of a course has limitations," Richard added. "Mathematics is no fun, and a student might evaluate a course with that in mind, but two years later the student sees that mathematics really matters in learning other subjects and appreciates the course and the teacher more."

Lynda Davidson, associate dean at the School of Nursing, said that in her school a better indicator of good teaching than student evaluations is the success rate of students on national exams, which are required for licensing.

"Over the past 10 years, our passing rates on the national exams have run between 89 and 92 percent, with the national average slipping from about 90 percent 10 years ago to about 85 percent recently," Davidson said. "We look at these rates very carefully every year with our curriculum in mind. Are there weaknesses we can address? We want to prepare well-rounded, well-educated individuals, but we want our students to be able to practice nursing."

Nathan Hershey, a past president of the University Senate who has expressed concerns about the instruments and process of student evaluations, said his objections are still valid. Hershey said that student evaluations could be affected by whether the student was a major in the subject. He also questioned how well students in professional programs could evaluate whether courses will be useful before they are practicing the profession.

"There had been concern expressed by some people that [evaluating teachers] was sort of a popularity contest, and some faculty members played to the audience to get good evaluations," Hershey said. "But when I looked into pursuing the issue, nobody appeared to be very interested."

Dean of Arts and Sciences N. John Cooper said evaluations by peer faculty, which also are required institution-wide, are intended to balance evaluations by students. "When looking at the teaching achievements of faculty colleagues for review purposes, one of the key responsibilities of the peer evaluation is to look not only at the content of the courses but also to provide an overall picture of the teaching, which ought to rectify any student criticism that's based on tough grading standards," Cooper said. "That's one of the reasons the peer evaluation is such an important part of the teaching portfolio that is considered for tenure and promotion."

Cooper said he didn't think instructors were "playing to the audience" in exchange for positive evaluations.

"I realize that faculty get concerned that firmer grading standards may translate into weaker student evaluations. But I have, many times, seen very good evaluations in which students have written things like, 'The professor makes us work very hard, but it's a wonderful course.'"

OMET's Baker said that there is a second safeguard against negative student evaluations besides the peer review process. "Instructors should supplement the evaluation forms with information about their teaching philosophy, their methods, their course goals, the thinking behind creating the course syllabus, why they chose certain assignments"; in short, their teaching dossier.

Baker also pointed out that some academic units use the student comments form, typically four questions with two on teaching and two on the course, as formative (designed to help the instructor improve) and some use them as summative (designed to be considered by the department chair or dean).

"We recommend that these be used for the instructor's improvement only," she said. "The comments are often personal remarks that can't be quantifiably analyzed, and we don't want chairs and deans to use a few negative comments in deciding promotion or salary increases."

–Peter Hart & Bruce Steele

