University of Pittsburgh

September 14, 2006

SAT scores drop here, nationally

For the first time in three decades, SAT scores nationally declined sharply. The scores of entering Pitt freshmen declined as well, but more modestly.

But the reasons for the decline are in dispute, as is what the decline says about the quality of students.

The SAT test was revamped in March 2005 to include a third section on critical writing that added 45 minutes to the testing time and increased the total maximum score to 2400, up from 1600 in previous years.

In addition, the math section included more difficult problems from third-year algebra, and the traditional verbal section (now called “critical reading”) was re-structured, dropping the analogies component.

Nationwide, average composite scores on the math and critical reading sections dropped seven points from 1028 for freshmen entering college in 2005 to 1021 for 2006’s freshmen. The last time scores dropped as precipitously in one year was 1975, according to the College Board, which designs and administers the SAT.

At Pitt, the average SAT score of freshmen who entered in fall 2005 was 1234 (612 verbal, 622 math). That dipped to 1229 (609 critical reading, 620 math), a net drop of five points, said Betsy Porter, Pitt director of Admissions and Financial Aid. “So, we did better than the seven-point drop nationwide,” she said. “And we did considerably better compared to high school students across Pennsylvania — where the vast majority of our students come from.”

State scores were off 11 points from the previous year’s composite average (from 501 verbal to 493 critical reading, and from 503 to 500 math).

The College Board defended the new test’s validity in press materials released together with the latest scores on Aug. 27.

“When a new test is introduced, students usually vary their test-taking behavior in a variety of ways and this affects scores,” stated Gaston Caperton, president of the College Board.

According to Caperton, the leading variable in test-taking patterns was a decrease in the number of students choosing to take the test a second time. Students who re-test typically increase their combined score on the math and verbal tests by 30 points, which helps to lift the national averages, the College Board stated. Most colleges accept the highest scores for each test component, regardless of the number of times the test is taken.

Moreover, the drop in scores is not particularly dramatic, Caperton claimed. The two-point drop in the national average mathematics score represents approximately one-fifth of one test question on the SAT; the five-point drop in the national average critical reading score represents approximately one-half of one test question, he stated.

The College Board also maintained that its research analysis indicated that student fatigue due to the longer test was not a factor in lowered scores. In other words, its message is: Don’t blame the test itself.

But Pitt’s Porter remains unconvinced. “Typically, overall average scores from year to year may fluctuate a point or two, maybe three, so, yes, the seven-point national drop is a significant decline,” she said.

“I got wind of this in the spring when lower scores were showing up among our applicants, and I contacted people I know at Penn State and Temple and they were showing the same thing. So, I knew it wasn’t just us,” she said.

“It’s difficult to conclude that in the span of one year students are less prepared or less college-ready,” she said. “The strong temptation is to look at the test itself. There were changes from the previous year’s test: Critical reading test questions changed and they added some harder ones. There is the issue of student fatigue, and students not wanting to re-test after going through the experience or because of cost.”

The price of the test jumped from $28.50 for the old SAT to $41.50 for the new, longer version, which dissuaded some re-testers, she noted.

“So you’ve got strength of test, cost and stress all contributing to lower scores,” Porter said.

While nationally 3 percent fewer exam-takers re-tested this year than last year (44 percent of a total of about 1.5 million), at Pitt the change was less significant, Porter said. “For the fall of 2005, 439 applicants only took the test once, while this year 494 applicants only tested once. The applicant pools were very similar at 18,000-plus, so that’s not a big difference.”

While some colleges are making the SAT optional, Porter said Pitt will continue to require applicants to take the test. “It’s an important standardized measuring stick, but for us it is only one of many factors in admissions decisions,” Porter said.

Other factors include class rank, difficulty of the high school curriculum, letters of recommendation, personal essays, leadership qualities, signs of creativity, and records of service and volunteerism, she pointed out. “We feel that these qualities convert into good university citizenship, which is what we want,” she said.

The non-SAT score factors were consistent with last year’s entering freshmen, Porter said. For example, the high school class rank distribution is virtually the same between the two Pitt classes, with three out of four freshmen in the top 20 percent of their high school class.

“We’ve concluded that this year’s class is equally qualified and not less well prepared than last year’s. All things considered, we think our holistic evaluation process served the University well in this case,” she said. “One thing this does show is that it’s a bad idea to hang your hat on one criterion. To use a standardized test score as a cut-off point for admission is making a mistake.”

—Peter Hart

Filed under: Feature, Volume 39 Issue 2