
September 30, 2010

Doctoral program report has good & bad points, provost says

Provost Patricia Beeson

A new report released this week on rankings of national doctoral programs includes 38 of Pitt’s programs (see chart). The National Research Council (NRC) report, “A Data-based Assessment of Research Doctorate Programs in the United States,” has good points and bad, according to Provost Patricia Beeson, who as vice provost for graduate studies oversaw Pitt’s data collection for the NRC report.

Beeson told the University Times, “The NRC study reflects an unprecedented collection of data on research doctorate programs in the U.S. using a very complex methodology to try to summarize what these data say. We will be sorting through and interpreting this information for quite some time, but from our initial analysis, the University’s doctorate programs did quite well, with the majority showing improvement even in comparison to the very different system used in the last [1995] NRC study.”

There are several aspects of the report’s methodology that need to be considered, however, she said. “The NRC took on a very important project, and set for itself an almost impossible task, in that what they want to do is use quantitative, objective data. They took the right approach: Get the data and evaluate them to see what they have to say about the relative strengths of doctoral programs. However, how they chose to use these data is subjective. They have collected some 50 variables, although they used only 20 [in this analysis],” Beeson said.

“I might have chosen some other variables. For example, they chose as their research measure the percent of the faculty who have grants. But that could be a $10,000 grant or it could be a $1 million grant. It was just a yes/no question. It illustrates again that what’s important in a doctoral program may be different from one person to another, so there is no one single set of measures that is definitive,” she said.

“How you rank a particular program depends on what’s important to you,” she said. “They’ve chosen two ways we might think about what’s important. One was to ask faculty, ‘What do you think is important?’ and then use those answers to create the ‘S-rankings.’ For the other, they asked faculty, ‘Which programs do you think are good?’ and then tried to statistically infer what the weights would have to be to produce that ‘R-ranking.’ Those are what we would call more ‘revealed preferences’ for what’s important.”
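To see the distinction in mechanical terms, consider a minimal sketch in Python. All the numbers, the choice of variables, and the least-squares fit are illustrative assumptions, not NRC data or the NRC’s actual procedure, which is considerably more complex. The point is only that S-style scores apply weights respondents state, while R-style scores back out the weights implied by reputational ratings:

import numpy as np

# Invented numbers for five hypothetical programs measured on three
# normalized variables (e.g., publications per faculty member, percent
# of faculty with grants, completion rate). Nothing here is NRC data.
X = np.array([
    [0.9, 0.7, 0.8],
    [0.6, 0.8, 0.5],
    [0.8, 0.4, 0.9],
    [0.3, 0.6, 0.4],
    [0.7, 0.9, 0.6],
])

# S-ranking style: apply the weights faculty *say* matter.
stated_weights = np.array([0.5, 0.3, 0.2])
s_scores = X @ stated_weights

# R-ranking style: take reputational ratings faculty gave the programs
# and infer the weights that best reproduce them (ordinary least
# squares here, standing in for the NRC's more elaborate method).
reputational_ratings = np.array([4.5, 3.8, 4.1, 2.9, 4.0])
inferred_weights, *_ = np.linalg.lstsq(X, reputational_ratings, rcond=None)
r_scores = X @ inferred_weights

print("S-scores:", np.round(s_scores, 2))
print("Revealed weights:", np.round(inferred_weights, 2))
print("R-scores:", np.round(r_scores, 2))

Under either approach the final ordering depends entirely on the weights, which is why, as Beeson notes, the two rankings can disagree about the same underlying data.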

Beeson also noted that the universities chose which programs to supply data for and which ones not to enter into the study.

“Take nursing. There are 55 programs that are ranked, but there are 120-130 that actually grant PhDs. We’re rated very highly there. But that means we’re up at the top of a very select group, because there are another 70 or so that didn’t even bother to put the information in. The same is true with most of these programs. The number of programs evaluated is just a subset of all the doctoral programs in that field,” she said.

Pitt entered most programs that were eligible under NRC guidelines. Some smaller programs did not qualify to be assessed in the report. “You had to have awarded a certain number of doctoral degrees over a certain amount of time, for example,” she said. “There were also a few areas with overlapping programs, where we had to decide which one to enter.”

The fact that the study’s data are five years old is a concern for everyone who uses the report, Beeson maintained.

“The University of Pittsburgh has changed dramatically since 2005 and we’d like to see that captured. The data were collected from the faculty who were in place in 2005, a snapshot of our faculty at that date. And if you look at the graduate students and at the ‘median time to degree’ measure, that median time to degree is for students who started a long time before that, because they had to have graduated by then. So there are other things that are reaching back even further than 2005,” she noted.

“Do we wish the report were more current? Of course we do. The report tells us how far we advanced between 1995 and 2005-06, but it can’t tell us how we’ve advanced in the past five years. That said, it’s not that it’s useless information, because we can use it to help us continue to build strong programs here. When the data were first collected, probably 30 or 40 of the AAU schools shared their data, and we’ve already used that to compare some of our graduate programs. We were able to look at sizes of graduate programs, fellowships that we offer, how much support we were providing for graduate students, and get some sense of where we were. And as a result of that we did do some more investing in our graduate fellowships,” Beeson said.

Another positive feature of the report is that the rankings are posted as ranges, rather than single numbers, she said.

“When you look at the U.S. News rankings, and they say one program is No. 15 and another is No. 16 and another is No. 17, are they really different? It probably means that No. 15 is strong in this area and No. 17 is strong in that area, so it goes back to what weights you attach to those different things,” Beeson said.

“This report provides a variety of ways of thinking about things from the standpoint of a student looking into a graduate program. If what matters to you is placement — getting a job in an academic setting — then you don’t need to look at the overall rankings, but rather at the data on placement,” she said.

“But what makes this study of value is that you can drill down to the underlying data, and we’ve already started looking at that. We can look at each of these 20 variables and see how we compare with other institutions. And not just 20, because there are data on the 50 or more variables that were gathered,” the provost noted.

“We think it’s important for our students to have support sufficient for them to devote themselves to their studies, and we can compare that with other institutions. We did pretty well on those support measures, in fact very well, and that was reassuring to us.”

—Peter Hart

Filed under: Feature, Volume 43 Issue 3
