August 30, 2007

Complaints about U.S. News rankings grow

Like swallows returning to Capistrano each March, officials at higher education institutions scramble every August to trumpet — or to downplay — their standing in the annual U.S. News & World Report “best colleges” rankings.

University administrators recognize the inevitable popularity of the annual rankings among college-shopping students and their parents — the college guide and its counterpart, “America’s Best Graduate Schools,” published every April since 1994, are perennially the magazine’s top-selling issues. But school officials have been griping about the magazine’s methodology since the first “best colleges” rankings appeared in 1983.

The chief problem in the seven-category methodology, critics say, is the unscientific peer assessment, or reputational, survey, which asks administrators to rate hundreds of peer institutions. The peer assessment carries the greatest weight — 25 percent — in a school’s overall score.

But the other main indicators — retention rates, faculty resources, student selectivity, financial resources, graduation rate performance and alumni giving rates — have also come under fire as “apples and oranges” comparisons.

Alumni giving rates, for example, count the percentage of alumni who donate but not the average amounts they give, critics argue, allowing institutions to solicit nominal donations to pad their rates.

Student selectivity, including acceptance rates, often depends on an institution’s mission, and thus can penalize public and state universities whose missions compel them to take some less-gifted students and a higher proportion of applicants.

Critics also complain about comparing students’ standardized test scores when a growing number of institutions no longer require them for admission, and about factoring in high school class rank when more and more high schools no longer report it.

This year, the griping over the U.S. News rankings has grown louder, and several moves are afoot to develop alternative college data summaries, according to the May 25 Chronicle of Higher Education.

The Chronicle noted that:

• The Association of American Universities (AAU), of which Pitt is a member, announced in May that member institutions on a voluntary basis would begin collecting data on graduation rates, the time required to complete a degree, post-graduation career statistics and cost estimates.

But AAU President Robert M. Berdahl in a prepared release acknowledged that such a reporting system is still a few years away because of the difficulty of defining and collecting such data. For example, current data on graduation rates are faulty because they do not account for transfer students, an AAU spokesperson noted.

• In June, the National Association of State Universities and Land-Grant Colleges, partnering with the American Association of State Colleges and Universities, circulated a template to their 650 members for publishing educational data online, The Chronicle reported.

• The National Association of Independent Colleges and Universities has followed suit with a similar online template.

The goal is to provide data that would allow potential students to compare colleges by enrollments, class size, majors offered and other factors, all while avoiding a ranking system.

• In April, a group of administrators from national liberal arts institutions circulated a letter urging college presidents to refuse to use their institutions’ U.S. News rankings in publicity materials and to boycott the magazine’s request for peer assessments, the most heavily weighted component of a school’s overall U.S. News score.

U.S. News editor Brian Kelly defended the rankings as an impartial accumulation of accepted educational data. “[These rankings] are from an independent, third-party journalism point of view,” Kelly said. “We have no ax to grind.”

The two “pillars” on which U.S. News’s methodology rests, Kelly said, are quantitative measures that education experts have accepted as reliable indicators of academic quality, and the magazine’s unbiased view of what a high-quality education means.

But Kelly did acknowledge a steady erosion in colleges’ responses, particularly to the reputational survey, whose response rate has fallen from a high of 68 percent in 2000 to 51 percent for the 2008 survey.

*

Pitt’s administration also has raised objections periodically to the U.S. News rankings. Recently, in response to an alumnus’s questions, Provost James V. Maher said the biggest danger is allowing the rankings to influence sound educational policies.

“Like most of this country’s major universities, we have very mixed feelings about the U.S. News & World Report rankings,” Maher stated in a June posting on the Provost’s office web site. “We cannot ignore them because important partners (like potential students and their families and potential faculty members and their families) take them seriously.

“On the other hand,” the provost continued, “we are struggling to serve our students and society without spending more money than is needed, so we would regard it as irresponsible if we were to allow the U.S. News rankings to force us to spend money in a way that we regard as otherwise inappropriate.”

While Pitt provides educational data for the annual rankings, Maher stated, internally the University’s response has been to examine each component U.S. News uses on its “programmatic merit” rather than as a way to rise in the rankings.

For example, he said, because Pitt administrators believe that some subjects are better taught in small sections, the University has increased the number of sections in writing courses and limited the maximum class size in sections of other “judiciously chosen courses.”

These factors are evaluated by U.S. News regardless of the subject of study, but that misses the point, the provost said.

“The question is not whether there are any courses taught in large sections but rather how many such courses there are,” Maher stated. “U.S. News, by focusing on sections rather than courses, has had to pick a section size, and then by choosing a section size of 50 as their definition of ‘large’ they have severely disadvantaged universities that are trying to keep the sections of these courses from becoming too large.”

The provost cited Pitt introductory science courses that typically have about five sections of 80-150 students. In contrast, other universities may have one section of up to 1,000 students, he noted.

“We think that we can teach our students better by keeping our section sizes down by having extra sections, and we are not willing to change this,” Maher stated. “But to U.S. News, which counts sections of enrollment over 50 and reduces ranking the more such sections it finds, we have about five times as many large sections as we would have if we emulated those [other] universities.”
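The arithmetic behind the provost’s example can be sketched in a few lines of code. The Python snippet below is only an illustration, using invented enrollment figures (a 500-student introductory course) and the over-50 definition of “large” cited above; it is not U.S. News’s actual computation:

    # Hypothetical illustration of a per-section "large section" count.
    # Enrollment figures are invented; the >50 threshold follows the
    # definition of "large" described in the article.

    def count_large_sections(section_sizes, threshold=50):
        """Count sections whose enrollment exceeds the threshold."""
        return sum(1 for size in section_sizes if size > threshold)

    pitt_style = [100] * 5  # five sections of about 100 students each
    mega_style = [500]      # one section of 500 students

    print(count_large_sections(pitt_style))  # 5 "large" sections
    print(count_large_sections(mega_style))  # 1 "large" section

Both approaches teach the same 500 students, but under a per-section count the multi-section model registers five times as many “large” sections, which is the penalty Maher describes.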

Other components of the U.S. News methodology are counterproductive, he said, such as the heavy weight attached to how much money a university spends delivering its programs: the more money spent, the better the ranking.

“In no other endeavor that I’m aware of would one say that if two institutions produced the same product for significantly different costs the institution that spent more money producing the product was the better institution,” Maher maintained.

“Rankings do matter because the value of a Pitt degree for alumni as well as prospective students can be hurt by bad rankings, but we cannot allow the rankings to skew the quality or cost of the education we offer,” Maher concluded.

—Peter Hart

Filed under: Feature, Volume 40 Issue 1
