University of Pittsburgh

September 30, 2010

NRC doctoral rankings released

Pitt is included in a just-released but long-anticipated doctoral program report from the National Research Council (NRC), part of the National Academies.

The report, “A Data-based Assessment of Research Doctorate Programs in the United States,” issued Sept. 28, covers more than 5,000 U.S. programs in 62 fields at 212 institutions. Pitt is rated in 38 program areas.

NRC has assessed the quality of research doctoral programs in U.S. universities twice previously, in 1982 and 1995. The methodology of the current study, however, represents a significant departure from that of the earlier reports.

The methodology also is not without controversy, primarily because of the age of the report’s data, all of which pertain to the academic year 2005-06. Data were collected in late fall 2006 and spring 2007 via surveys of universities, programs, faculty and, in a few subject areas, students.

For the rankings, NRC used survey data on 20 variables related to scholarly productivity of program faculty, effectiveness of doctoral education, research resources, demographic characteristics of students and faculty, resources available to doctoral students and characteristics of the doctoral program.

The report covers such characteristics as faculty publications, grants and awards; student GRE scores, student financial support and employment outcomes; and program size, median time to degree and faculty composition. Measures of faculty and student diversity also are included.

NRC used four campus-based data collection instruments to derive the ratings. An institutional questionnaire collected institutional data, and a program questionnaire asked about programs and faculty participating in the programs.

Faculty members were surveyed with a faculty questionnaire, and a subset also was surveyed via the rating-of-program-quality questionnaire (what NRC refers to as the “anchoring study”).

In a departure from traditional single-ordinal rankings comparing programs, each program in the NRC report received an overall rating range (for example, 14-35), as well as ranges of rankings for three dimensions of program quality: research activity; student support and outcomes; and diversity of the academic environment.

According to NRC, this system is designed to account for the inherent differences among raters, statistical uncertainty and year-to-year variability in the data.

NRC cautioned, “These illustrative rankings should not be interpreted as definitive conclusions about the relative quality of doctoral programs, nor are they endorsed as such by the National Research Council. Rather, they demonstrate how the data can be used to rank programs based on the importance of particular characteristics to various users — in this case, to faculty at participating institutions.”

The approach used to generate the ranking ranges incorporates both data on program characteristics and faculty values, the report explained.

“For each program, the study analyzed data on 20 characteristics, ‘weighing’ the data according to the characteristics valued most by faculty in that field. Thus, the weights on which the rankings are based are derived from the faculty in each field. The rankings are given in broad ranges rather than as single numbers, to reflect some of the uncertainties inherent in any effort to rank programs by quality.”

According to the NRC report, for each program, two illustrations of rankings for overall program quality are given, based on two different methods of discerning what faculty in each field believe is important in a high-quality doctoral program:

The S- (for survey-based) rankings are based on a survey that asked faculty to rate the importance of the 20 different program characteristics in determining the quality of a program. Based on their answers, each characteristic was assigned a weight; these weights varied by field. The weights then were applied to the data for each program in the field, resulting in a range of rankings for each program.
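The weighted-sum step behind the S-rankings can be sketched in Python. This is an illustrative sketch only, not NRC's actual code: the characteristic names, values and weights below are invented for the example, only three of the 20 characteristics are shown, and NRC's standardization of the raw data before weighting is omitted.

```python
# Illustrative sketch (assumed details, not NRC's code): applying
# field-specific, survey-derived weights to program characteristics
# and ranking programs by the resulting weighted score.

def s_score(characteristics, weights):
    """Weighted sum of a program's characteristic values."""
    return sum(weights[name] * value for name, value in characteristics.items())

# Hypothetical faculty-survey weights for three of the 20 characteristics;
# a negative weight means faculty prefer a lower value (e.g., time to degree).
weights = {
    "pubs_per_faculty": 0.5,
    "pct_faculty_with_grants": 0.3,
    "median_time_to_degree": -0.2,
}

# Invented data for two hypothetical programs in one field
programs = {
    "Program A": {"pubs_per_faculty": 2.1, "pct_faculty_with_grants": 0.8,
                  "median_time_to_degree": 5.5},
    "Program B": {"pubs_per_faculty": 1.4, "pct_faculty_with_grants": 0.6,
                  "median_time_to_degree": 6.0},
}

# Rank programs by descending weighted score
ranked = sorted(programs, key=lambda p: s_score(programs[p], weights), reverse=True)
print(ranked)
```

In the real study this scoring was repeated with perturbed weights to produce a range of rankings per program rather than a single ordinal position.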

The R- (for regression-based) rankings are based on an indirect way of determining the importance faculty attach to various characteristics. (A regression analysis measures the relationship between a dependent variable and one or more independent variables.)

According to the report, “groups of randomly selected faculty were asked to rate the quality of a sample of representative programs in their field. Based on the sample program ratings, weights were assigned to each of the 20 characteristics using statistical techniques; again, these weights varied by field.”

These weights were applied to the data about each program, resulting in a second range of rankings, the report states.

“Each approach yielded a different set of weights, and therefore resulted in different ranges of rankings. In the S-rankings, for example, faculty in most fields placed the greatest weight on characteristics related to faculty research activity, such as per capita publications or the percentage of faculty with grants. Therefore, programs that are strong in those characteristics tend to rank higher. Such characteristics were also weighted heavily in the R-rankings for many fields, but program size (measured by numbers of PhDs produced by the program averaged over five years) frequently was the characteristic with the largest weight in determining these rankings.”

The degree of uncertainty in the rankings, according to the NRC report, “is quantified in part by calculating the S- and R-rankings of each program 500 times. The resulting 500 rankings were numerically ordered and the lowest and highest five percent were excluded. Thus, the 5th and 95th percentile rankings — in other words, the 25th highest ranking and the 475th highest ranking in the list of 500 — define each program’s range of rankings.”

For example, Pitt’s program in anthropology received a rankings range of 11-45 in the regression (R-rankings) table, with the 11 representing the 5th percentile ranking and the 45 the 95th percentile ranking — that is, the middle 90 percent. Viewed another way, the program can claim that it ranked between 11th and 45th overall with 90 percent statistical certainty.
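The range calculation described above — rank each program 500 times, sort the results, and discard the top and bottom 5 percent — can be sketched as follows. The 500 rankings here are randomly generated stand-ins (real values would come from recomputing the weighted rankings under resampled weights); the field size of 70 programs is invented for the example.

```python
# Illustrative sketch (assumed details, not NRC's code) of trimming
# 500 recomputed rankings to the 5th-95th percentile range.
import random

random.seed(0)
# Stand-in for 500 recomputed rankings of one program in a
# hypothetical field of 70 programs
rankings = sorted(random.randint(1, 70) for _ in range(500))

low = rankings[24]    # 25th highest ranking = 5th percentile
high = rankings[474]  # 475th highest ranking = 95th percentile
print(f"Ranking range: {low}-{high}")
```

The published range for a program is simply this (low, high) pair, which is why a program reports, say, 11-45 rather than a single rank.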

Each program also received separate sets of rankings for research activity; student support and outcomes; and diversity of the academic environment, using the S-rankings formula described above.

The data are reported in Excel spreadsheets that list the rated programs in a given field alphabetically. The full report is available from the National Academies.

—Peter Hart

Filed under: Feature, Volume 43 Issue 3
