Innovation in Education: Making grading for Python coding easier

By MARTY LEVINE

Grading essays may be tough, but assessing a single batch of 40 to 60 Python coding assignments can take an individual teaching assistant two weeks, as Dmitriy Babichenko, a School of Computing and Information faculty member, has found in his own data analytics classes.

That restricts him to using only four assignments per semester, in which students are asked to write code and briefly explain their methods. But students would benefit from more frequent, smaller coding assignments — and from getting their feedback and grades more quickly.

Babichenko’s proposal to transform this situation — not just for his class, but for the nearly 60 classes at Pitt that teach Python coding across many disciplines — is among seven that won a 2023 Innovation in Education Award from the Provost’s Advisory Council on Instructional Excellence.

His project, now underway, will start by collecting assignments from those many courses throughout Pitt that use the Python programming language. Python is so common now that it appears in everything from political science and digital humanities to library science offerings. Talking to other faculty prior to submitting his proposal, Babichenko found that easing the assignment assessment process was “sorely needed.” The project team’s first step is to create assignment templates that faculty in many disciplines can use. Step two will be to devise automated grading methods for use across these many disciplines as well, employing artificial intelligence.

This project includes three faculty partners — Jacob Biehl from computing; Ravi Patel from the School of Pharmacy; and Na-Rae Han from linguistics in the Dietrich School of Arts & Sciences.

“It’s difficult to find a department at Pitt that doesn’t teach a type of Python,” Babichenko says. Python has become widespread here because more types of jobs require coding capabilities.

The project will be pilot-tested in Babichenko’s own introduction to data analytics course, which has 40 to 60 students every semester. If he were to add new assignments or even multiple iterations of one assignment, as students wish, the class would need an army of teaching assistants, he says. Instead, “we want to try to automate this process to a degree,” keeping the homework meaningful for students while making it less burdensome to instructors.

AI can assess the quality of the code and suggest how it could be improved, and it can, to a degree, assess students’ brief explanations of their methods. In fact, there are already libraries of industry-standard tools for Python assessment, Babichenko says. AI also can easily assess the most basic measure of assignment fulfillment — does the code run and produce the desired result?
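That most basic check — run the student’s program and compare its output to the expected result — can be sketched in a few lines of Python. This is only an illustrative sketch, not the project’s actual tooling; the function name, arguments and grading logic here are hypothetical.

```python
import subprocess
import sys

def check_submission(script_path, expected_output, timeout=10):
    """Run a student's Python script and compare its output to the expected
    result. Returns (passed, details) -- the simplest pass/fail check.
    Hypothetical helper for illustration; not part of the Pitt project.
    """
    try:
        result = subprocess.run(
            [sys.executable, script_path],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False, "Script did not finish within the time limit."
    if result.returncode != 0:
        # The script crashed -- report the error for student feedback.
        return False, "Script crashed:\n" + result.stderr
    if result.stdout.strip() == expected_output.strip():
        return True, "Output matches the expected result."
    return False, "Output differed. Got:\n" + result.stdout
```

A real autograder would go further — unit tests, style checks, timeouts per test case — but this captures the pass/fail baseline the article describes.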

Once enough assignments are collected this summer from faculty teaching Python, the project this fall will employ Ph.D. and undergraduate students to create the assignment templates and make sure that the whole process is compatible with Canvas, as well as with an even more common platform where programs are developed, revised and stored — GitHub (which is also used by many workplaces). Once the project is complete, instructors in any discipline can employ these templates, and the new assessment tools, for immediate assignment evaluation and student feedback.

Babichenko plans for the pilot test to run in summer 2024, followed by TAs checking up on the AI assessments and students being surveyed as to its success. “If you can use Canvas,” he says, “you should be able to use this tool.”

Marty Levine is a staff writer for the University Times. Reach him at martyl@pitt.edu or 412-758-4859.