
March 16, 2017

University Senate Matters

Senate plenary session: The role of research metrics in faculty evaluation

“Not everything that can be counted counts, and not everything that counts can be counted.” This aphorism, coined by sociologist William Bruce Cameron in 1963, is especially apt today amidst the steadily rising tide of big data. Powerful analytic tools enable us to tap this rich data stream and evaluate more and more things using quantitative metrics.

For example, it can be tempting to measure scholarly productivity by counting publications, citations and grants, especially when advanced software can display results in compelling data visualizations. On the other hand, there is concern that institutional embrace of such quantitative metrics may eclipse more qualitative evaluations, such as peer review, that traditionally have anchored judgments regarding scholarly impact. As a recent scientometric study observes, “One of the most discussed issue[s] in research evaluation and policy literature is about the agreement between peer review and bibliometric indicators.”

The issue is relevant to us at Pitt, where analytic software is available to measure scholarly research impact and other factors at the individual and unit level. On March 29, the Pitt community will have an opportunity to explore these issues with two experts in metrics and scholarly communication.

Diana Hicks, who specializes in metrics for science and technology policy, is a professor in Georgia Institute of Technology’s School of Public Policy. She was the first author on “The Leiden Manifesto for Research Metrics,” published in Nature in 2015, which outlines 10 principles to guide research evaluation. Her work has been supported by and has informed policymakers in the U.S., Europe and Japan, and she has advised the Organisation for Economic Co-operation and Development and the governments of Flanders, the Czech Republic and Sweden on national research evaluation systems.

Cassidy Sugimoto is an associate professor at Indiana University-Bloomington’s School of Informatics and Computing. She conducts research in scholarly communication and scientometrics, examining the ways in which knowledge producers consume and disseminate scholarship. She has co-edited two books and published 70 journal articles on these topics.

How have other institutions navigated the challenges associated with the development of advanced scholarly metrics? Some schools, such as the University of Virginia and Columbia, have signed the San Francisco Declaration on Research Assessment. Others, such as Georgetown, have conducted their own internal data validation exercises to assess the utility of bibliometric tools. Last April, Sugimoto led her school’s Faculty Council in adopting a “Policy on Faculty Scholarly Activity Systems.”

That policy is informed by five foundational principles adapted from The Leiden Manifesto:

• Systems used by faculty and administrators should acknowledge and take into account the heterogeneity of disciplines by making coverage transparent and including field-normalized indicators.

• Quantitative indicators generated within these systems should be used to supplement rather than supplant other forms of review, such as peer review.

• The structure, data and use of the system should align with the values of the institution and not incentivize behavior incompatible with these values.

• Systems should provide data that are accurate and can be made available for validation.

• Data about individual faculty members should be made available to those faculty members.

In framing the discussion of scholarly metrics within a wider national and international context, speakers Sugimoto and Hicks will lay groundwork for plenary session participants to consider other institutions’ strategies in this area and reflect on questions such as:

• How do disciplinary differences complicate the task of developing responsible approaches to the use of scholarly metrics?

• What are the dynamics involved in using metrics to assess productivity of individual faculty, versus whole departments or schools?

• Are there shared governance issues implicated by the incorporation of scholarly metrics into institutional resource allocation decisions?

• What are best practices for striking the appropriate balance between quantitative metrics and qualitative judgments in assessments of scholarly productivity?

The plenary session will include remarks from Chancellor Patrick Gallagher and Provost Patricia E. Beeson. A panel composed of Steve Wisniewski, vice provost for data and information and professor of epidemiology; Sanjeev Shroff, distinguished professor and Gerald McGinnis Chair in Bioengineering; and Gordon Mitchell, assistant dean of the University Honors College and associate professor of communication, will respond to the two main speakers. Of course, there will also be time for questions from the floor.

Join us on March 29, noon-3 p.m., in the William Pitt Union Assembly Room as these two experts expand our knowledge of issues surrounding data, peer review and research evaluation, and our local panel provides insight on practices at Pitt.

Comments or questions about the plenary session can be submitted through the box on the Senate website.

Robin Kear is vice president of the University Senate and faculty librarian in the University Library System. Gordon Mitchell is assistant dean of the University Honors College and an associate professor of communication.
