
February 21, 2008

Multi-year plan to assess student learning continues

Responding to a national push for more accountability in education, Pitt is in the midst of an institution-wide plan for assessing student learning.

The second phase of that multi-year plan is coming to a close. The project, launched in November 2006 by the Council of Deans, aims to evaluate the success of Pitt’s educational programs by documenting student learning outcomes over time.

During the first phase of the initiative, which ended last March, deans, program directors and campus presidents submitted documentation of their assessment process for each degree and certificate program, both graduate and undergraduate, according to Patricia Beeson, vice provost for Undergraduate and Graduate Studies, who is the Provost’s office coordinator for the initiative.

Phase 2 includes developing assessment plans for undergraduate general education requirements (GERs) by March 3.

The effort to measure results of the plans is expected to last through the 2009 academic year, Beeson said.

“The guiding principles of this effort are that the assessment be faculty-driven, because faculty know best what their students should know; that it be comprehensive; that it be meaningful to ourselves and to others, and that it be sustainable, that is, embedded in the annual planning process and seen to drive change,” Beeson said.

(More information is available at the student learning assessment web site: www.provost.pitt.edu/assessment/index.html.)

“By March 2007, every department and program turned in something. There was varying quality, in part because we’re all learning how to do this,” Beeson said. “We couldn’t just say, ‘Here’s exactly what you need to do. Here’s what your learning outcomes should be. Here’s how you should be thinking about assessing them.’ Certainly, faculty have always been thinking about student outcomes. But we haven’t traditionally thought about that when we’re evaluating our programs. There’s a set of goals that are common throughout the University, such as being able to think critically, communicate clearly, use quantitative reasoning and so forth. But you also expect that a student who graduates in engineering has a different set of skills than someone in the humanities.”

So, in addition to asking whether a program is meeting University-wide goals, departments now need to ask program-specific questions, she said.

“For example, asking: ‘Are the econ and the math programs meeting the expectations they set for their students in those majors?’” Beeson said. “It’s more from the bottom up and it’s looking at a different dimension of program evaluation.”

Traditionally, programs undergo comprehensive reviews every 5-10 years; those reviews include a self-study evaluating the quality of the faculty and the curriculum, Beeson said. “Also, people from the outside visit and talk to the faculty and look at the materials they have put together, look at the resources, the lab space, the research, the courses, what the enrollments are — all these inputs, if you will. This student learning outcomes information I think will be one more piece of information that you add to that.”

The impetus for the student learning assessment process reflects a national shift that has taken place over the past decade or so, she said.

“Nationally, we see the concerns that motivated the No Child Left Behind policy, the need for more transparency and accountability to the public,” Beeson said.

In addition, she said, accrediting agencies — including the Middle States Commission on Higher Education, which evaluates Pitt every 10 years — have required that universities include some assessment of student learning outcomes.

Efforts are being made to keep the student learning assessments in line with the ongoing accreditation work of the various academic units, Beeson noted.

“When I look back at our self-study done in 2001 in preparation for our last accreditation in 2002, there was a lot of reporting on student assessment and student outcomes. So, at the University level, we’ve done things like assessing writing competence of our students on a regular basis. Or basic and quantitative reasoning is another example. We regularly survey our students to get their feedback, also. So this process is building on that,” Beeson said.

(For some examples of department- and school-wide learning assessment efforts, see related story this issue.)

The Provost’s office adopted a five-step documentation matrix that has been used at the University of Virginia. The five categories in the matrix are:

• 3-5 learning outcomes: What will students know and be able to do when they graduate?

• Assessment methods: How will the outcome be measured? Who will be assessed and how often?

• Standards of comparison: How well should students be able to do on the assessment?

• Interpretation of results: What do the data show?

• Use of results/action plans: What changes were made after reviewing the results?

“Last year, by March 1, all degree and certificate programs and departments had to complete the first three [categories] as part of their annual planning document,” Beeson said.

For instance, she said, an English department learning outcome could be: By the end of sophomore year, students can, on an exam, describe and explain literary and cultural theories of English literature.

“Or, you could look at it another way,” Beeson said. “Our English department has a senior seminar. The faculty who are teaching the senior seminar meet annually, as they’ve always done, to discuss the students as a whole. They might say: ‘You know, it doesn’t seem like the students were as good at this particular type of analysis as I would have hoped.’”

Based on that discussion, she said, the department’s curriculum committee would look at ways to improve the curriculum.

Beeson said the question for the department becomes: How can we adjust what we’re teaching in the other courses that build up to the senior seminar to help develop the type of analysis we wish to improve?

By March 3, departments must have taken at least one of the outcomes, interpreted the assessment results and described what they will do with the results; in other words, they must complete categories 4 and 5 of the matrix, she said. “The most important parts are: What do we expect of our majors, and what are we going to do with our curriculum to take that assessment into account?” Beeson said.

Simultaneous with the second-phase evaluation at the department level, schools this year were asked to fill in matrix categories 1-3 for their undergraduate general education requirements, Beeson said. Next year, that process will work toward categories 4 and 5 for GERs, she added.

One result of the process University-wide, Beeson said, is that “whenever a new program is proposed to the University Council for Graduate Studies or to the provost’s advisory committee for undergraduate programs, they have to include a matrix like this, because it’s part of how you’re going to assess whether your [program] is successful in achieving those goals you set.”

Beeson acknowledged that the learning outcomes assessment process has been time-intensive for the faculty, especially early on. “We had a quick start last year. There wasn’t a big gap between the [November 2006] Council of Deans report and the day the first results were due [March 1, 2007],” she said.

That quick pace was motivated by the fact that Pitt was facing a June 2007 deadline to file a mid-point evaluation report with Middle States.

“Part of it was also: Let’s get something, let’s get started, knowing that there was going to be revision. But we needed to get people to start thinking about this seriously. I think we achieved that goal. Most departments had already been thinking about this, so it was just documenting what they’d already been doing. For others who maybe hadn’t been thinking about doing it this way, there was a little more work,” Beeson said.

The professional schools, which already have to report student assessment outcomes to their respective accreditors, were able to substitute their accreditation reports for the matrix documentation, she noted.

“We really tried to make this something that will be useful without being incredibly burdensome,” she said. “That’s why we said 3-5 learning outcomes. We don’t want you to identify 50 things you’re going to do. And we’ve said assess at least one every year; you don’t have to do everything every year. I am sympathetic to the fact that there is a cost [in time] that’s borne in the beginning through hard work, trying to think about what it is we’re trying to get our students to know.”

The Provost’s office has set up support systems for faculty who are feeling their way through the process, she added. “We developed that web site and we keep adding to it when we find out from programs or schools what they don’t understand or would like help with,” Beeson said.

In addition, staff in the Center for Instructional Development and Distance Education are available to advise departments, and the Office of Management Information and Analysis is developing online survey capabilities that piggyback on surveys already in place, she said.

“We survey alums and ask institutional-level questions, but individual programs now can add their department-specific questions for alums to answer. Certain programs might want to know how successful their alums are, for example, as one of the outcomes they assess,” Beeson said.

As to her responsibility as the Provost’s office liaison, Beeson said, “What I do is look at the goals and the assessment methods and see if they match. I’m not giving that much detail to each program, because we’ve got hundreds of them. But each department chair is supposed to do the initial review, then it goes to the dean’s office. Hopefully, by the time we get it, it’s gone back and forth a lot. I’m just here looking at whether I need to give the dean some direction, and whether there are some inconsistencies across the programs in a school. The largest role is the faculty in the departments; the deans are simply giving feedback, giving direction or giving ideas from other departments and I’m doing the same for the deans.”

—Peter Hart

