
April 14, 2011

Provost discusses efforts to evaluate student learning

Provost Patricia Beeson outlined her office’s efforts to assess student learning for Faculty Assembly last week.

At the February Assembly meeting, Senate President Michael Pinsky had called for a review of evaluation processes that look at whether Pitt is meeting its educational goals. (See March 3 University Times.)

That issue dovetails with the University’s current efforts toward reaccreditation, according to Beeson, who spoke at the April 5 Assembly meeting on “Assessing Student Learning and Reaccreditation 2012.”

Pitt is in the midst of a two-plus-year process of self-evaluation to meet the requirements of the Middle States Commission on Higher Education, the accrediting arm of the Middle States Association of Colleges and Schools. The University undergoes a reaccrediting evaluation by the Middle States Commission every 10 years; the current evaluation is expected to run through fall 2012.

In its self-study, Pitt has chosen the theme “Using a University-wide Culture of Assessment for Continuous Improvement,” which includes major components on assessment of the student experience, assessment of institutional effectiveness and demonstration of compliance with Middle States standards.

(For a detailed overview of Pitt’s Middle States reaccreditation process and methodology, see Sept. 30 University Times.)

“This is very much related to this topic of how do we assess the quality of our programs, and how do we assess them in a way that allows us to continue to develop and improve upon those programs?” Beeson said.

“Our process of assessing student learning is really the most recent evolution of an ongoing process” that dates back many years, she said.

“For decades we’ve had processes of program evaluations, where we do self-studies and we evaluate the ‘inputs’ of the knowledge production process,” Beeson explained. “We look at the quality of the faculty, we examine the curriculum, we look at the facilities, we look at other aspects of the academic program. We often bring in outside evaluators to help us understand how our programs compare to the standards of that particular discipline.”

Those processes also traditionally included measuring “outputs,” such as student placement after graduation, retention and graduation rates, and student satisfaction as measured by a number of surveys, she noted.

Only in the last 10-15 years have those assessments begun to use information about what students actually learned to inform the evaluation of a program and its curriculum.

“For example, many of the first professional degree programs — law, business, medicine, dental medicine — started to look more carefully at the outcomes of their licensure exams,” Beeson said.

“Around 2006 and into 2007, the Council of Deans decided it would be important to establish guidelines regarding our institutional expectations concerning the assessment of student learning. While the timing was good for us institutionally because we were already moving in that direction, it was also a good time for us in terms of the national conversations that were going on and the resulting requirements for our reaccreditation,” Beeson told Faculty Assembly.

At about the same time, Beeson said, the federal Department of Education was proposing national standards for higher education, directed toward the question: Are there certain things that students should learn by the time they graduate, independent of what institution they attend?

“They went so far as to propose standards not unlike the No Child Left Behind standards. They proposed extensive public reporting requirements on everything from the cost of attendance to the value-added of the institution. That was going to be measured by student improvement on some sort of national standardized test,” she said.

On the face of it, that may have seemed like a good idea, but in fact it is not, Beeson maintained, because a standardized test does not take into account differing institutional missions. In addition, the value-added measure is flawed when entering freshmen already start at a high level of achievement, as the large majority of Pitt’s entering freshmen do; such students have little room to improve on the same standardized test taken as graduating seniors, she said.

In contrast, the guiding principles of Pitt’s efforts are that student assessment be faculty-driven, comprehensive, applied in a meaningful way to programmatic and curricular improvement, and sustainable, that is, embedded in the annual planning process, she said. Pitt’s internal guidelines now require every program and school at every Pitt campus to submit a student learning assessment report every year, she noted.

“We believe that the question of assessing student learning is most appropriately examined by the departments and the programs at individual institutions,” Beeson said.

“It’s the faculty that have both the knowledge of the subject and the regular contact with students, so they’re in the best position to explore success and concerns. Equally, it’s the faculty at the individual institutions that can use that information to improve their curriculum,” she said.

“Our approach to assessing student learning is based on the belief that the local level is the right place to determine what students should learn and assess whether or not they’re learning it. And finally, we use the results of those assessments to improve the academic program and to make sure that the programs are constantly keeping current on what students need to know in that particular discipline if they’re going to be successful after they graduate.”

(For a related story on Pitt’s student assessment strategies, see Feb. 21, 2008, University Times.)

In other Assembly business:

Pinsky announced that the sustainability subcommittee, part of the Senate plant utilization and planning committee, is sponsoring a presentation on Pitt’s sustainability efforts, led by Joseph Fink, associate vice chancellor for Facilities Management, at 3 p.m. April 20 in 4127 Sennott Square.

John Baker, chair of the Senate elections committee, announced that electronic balloting for membership on the 15 Senate standing committees will be held April 19-May 1. (See slate of committee candidates.)

—Peter Hart

