
May 12, 2011

NRC recalculates rankings for doctoral programs

The National Research Council (NRC) last month released recalculated rankings after errors were found in its recent comprehensive report on U.S. doctoral programs.

The report, “A Data-based Assessment of Research Doctorate Programs in the United States,” issued last Sept. 28, covered more than 5,000 U.S. programs in 62 fields at 212 institutions. (See Sept. 30 University Times for three related articles.)

In the initial report, as well as in the revised calculations, Pitt was rated in 38 program areas. NRC adjusted the rankings ranges for virtually every Pitt program, but the changes were minor.

[Chart: Pitt doctoral program rankings, September report vs. revised report]

The chart on this page compares Pitt’s doctoral program rankings in the September report with the revised report.

NRC rankings are drawn from data gathered in fall 2006 and spring 2007, based on the 2005-06 academic year.

Following the release of the September report, Provost Patricia Beeson told the University Times: “The NRC study reflects an unprecedented collection of data on research doctorate programs in the U.S. using a very complex methodology to try to summarize what these data say. We will be sorting through and interpreting this information for quite some time, but from our initial analysis, the University’s doctorate programs did quite well.”

Beeson declined to comment on the revised report.

NRC revised its report after 34 institutions questioned the data for approximately 450 doctoral programs.

The most common questions, the recalculated report states, centered on faculty characteristics: publications per allocated faculty member; citations per publication; the allocation of faculty; and the measure of interdisciplinarity based on that allocation.

The revised report states: “In the course of this process, the NRC discovered four substantive errors. These have been corrected and incorporated into recalculated rankings.”

According to the revised report, the affected variables were:

• Average citations per publication. The 2002 publication counts used to obtain citations per publication had been mislabeled in all non-humanities fields. Those counts were corrected, and the “citations per publication” variable (which is averaged over the years 2000 to 2006) was recalculated.

• Awards per allocated faculty member. NRC undercounted honors and awards. Data for this variable were recompiled from faculty lists and the variable was recalculated.

• Percent with academic plans. The response rate to this question, which was calculated from the NSF Survey of Earned Doctorates, varied considerably across programs. NRC agreed that a more accurate measure based on survey data was the percent of respondents with academic positions or postdocs, not the percent of total PhDs. (For example, if 30 of a program’s 100 graduates reported academic plans but only 60 answered the survey, the measure rises from 30 percent to 50 percent.) This variable was recalculated with the changed definition.

• Percent of first-year students with full financial support. This variable had been given the value “0” when a program had no first-year students. Now NRC uses an asterisk to indicate that a program has no first-year students. When no data were reported, there is an “N/D.”

In a departure from traditional single-ordinal rankings comparing programs, each program in the NRC report received an overall rating range (for example, 14-35), as well as ranges of rankings for three dimensions of program quality: research activity; student support and outcomes; and diversity of the academic environment.

According to the NRC report, for each program two illustrations of rankings for overall program quality are given, based on two different methods of discerning what faculty in each field believe is important in a high-quality doctoral program:

• The S- (for survey-based) rankings are based on a survey that asked faculty to rate the importance of 20 different program characteristics in determining the quality of a program. Based on their answers, each characteristic was assigned a weight; these weights varied by field. The weights then were applied to the data for each program in the field, resulting in a range of rankings for each program. (A sketch of this weighted-sum calculation follows the next item.)

• The R- (for regression-based) rankings are based on an indirect way of determining the importance faculty attach to various characteristics: rather than weighting characteristics directly, faculty rated a sample of actual programs, and a regression analysis, which measures the relationship between a dependent variable and one or more independent variables, was used to infer the weights implied by those ratings.
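To make the S-ranking calculation concrete, below is a minimal Python sketch of the weighted-sum idea. Every name and number in it is invented for illustration; the NRC’s actual calculation used 20 standardized characteristics and field-specific survey weights.

```python
# Hypothetical importance weights, as if derived from the faculty survey.
weights = {
    "publications_per_faculty": 0.40,
    "citations_per_publication": 0.35,
    "percent_students_funded": 0.25,
}

# Invented, standardized characteristic values for three hypothetical programs.
programs = {
    "Program A": {"publications_per_faculty": 1.2,
                  "citations_per_publication": 0.8,
                  "percent_students_funded": -0.1},
    "Program B": {"publications_per_faculty": 0.3,
                  "citations_per_publication": 1.1,
                  "percent_students_funded": 0.9},
    "Program C": {"publications_per_faculty": -0.5,
                  "citations_per_publication": -0.2,
                  "percent_students_funded": 0.4},
}

# An S-style score is the weighted sum of a program's characteristics;
# sorting the scores yields one ordinal ranking.
scores = {
    name: sum(weights[c] * v for c, v in chars.items())
    for name, chars in programs.items()
}
for rank, (name, score) in enumerate(
        sorted(scores.items(), key=lambda kv: kv[1], reverse=True), 1):
    print(f"{rank}. {name}: score {score:.2f}")
```

The R-rankings would differ only in where the weights come from: a regression of faculty ratings of sample programs on those programs’ characteristics, rather than the survey weights assumed above.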

For example, Pitt’s program in anthropology received a rankings range of 11-45 in the initial regression (R-rankings) table.

In the revised table, the program received a 15-43 ranking, with the 15 representing the 5th percentile rankings and the 43 the 95th percentile rankings, or the middle 90 percent.

Viewed another way, the program could claim that it ranked between 15th and 43rd overall (among 82 programs ranked nationally) with 90 percent statistical certainty.
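As an illustration of how such a range arises, here is a minimal Python sketch. The replication scheme and all numbers are invented; the NRC generated its ranges by repeating the ranking calculation many times with resampled data and weights and reporting the middle 90 percent of the resulting ranks.

```python
import random

random.seed(0)

def simulated_rank() -> int:
    """One program's rank in a single simulated replication (invented noise)."""
    our_score = random.gauss(0.6, 0.15)                    # hypothetical program score
    rivals = [random.gauss(0.5, 0.2) for _ in range(81)]   # 81 hypothetical rivals
    return 1 + sum(s > our_score for s in rivals)          # rank among 82 programs

# Repeat the ranking 500 times and sort the resulting ranks.
ranks = sorted(simulated_rank() for _ in range(500))

# The reported range is the middle 90 percent of the replications:
# the 5th-percentile rank at one end, the 95th-percentile rank at the other.
low = ranks[int(0.05 * len(ranks))]
high = ranks[int(0.95 * len(ranks)) - 1]
print(f"Ranking range (middle 90 percent): {low}-{high} of 82 programs")
```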

Each program also received separate sets of rankings for research activity; student support and outcomes; and diversity of the academic environment, using the S-rankings formula described above.

The full report is available at http://www.nap.edu/rdp/.

—Peter Hart

