TEACHING AT PITT: Teaching and learning in the age of generative AI

By JOHN G. RADZILOWICZ

Generative artificial intelligence (AI) broadly refers to systems that can create diverse types of original content, including text, images and code, by extrapolating from patterns learned during pre-training on copious amounts of data. The first generative AI tools to make a big splash were text-to-image generators such as DALL-E and Midjourney, which had everyone trying out their newly super-enhanced artistic skills during the peak of the pandemic.

Last fall, OpenAI released ChatGPT, a chatbot built on a generative pre-trained transformer (GPT) large language model. It lets users generate and refine text toward desired content, length, format, style and level of detail. In a short time, competing products became ubiquitous. Since ChatGPT’s release, universities have grappled with how to respond to the arrival of these new AI tools in higher education.

Reactions were best described as a bit hyperbolic. Headlines across all forms of media declared an “existential threat” to a college education. Scholars predicted everything from the “death of the college essay” to “uncontrolled cheating and plagiarism.” Calls rang out for the all-out banning of generative AI in the classroom and for the rapid development of AI detection tools to catch students who used AI to commit academic integrity violations.

The initial focus on academic integrity violations obscured the potential benefits to teaching and learning that could come with proper risk management around these new tools. And that is what generative AI is — a new tool. While universities have sometimes been resistant to adopting and adapting to new tools, if history has taught us anything about new technology, it is that ignoring it, suppressing it and banning it are simply not successful strategies. As in the rest of life, adaptation is what leads to flourishing in both teaching and learning. So, adapt we must.

Generative AI is skilled at generating human-like writing using pattern recognition, mimicry and probability, and it is getting better all the time. Pattern recognition and mimicry are foundational blocks in a variety of forms of intelligence (intellectual, social, emotional, etc.), but foundations are beginnings, not endings. Creativity, critical thinking and interpretation must still happen at the human level. So, instead of fearing generative AI, the Teaching Center encourages faculty and students to embrace it, manage it and harness it — like any other valuable tool — to enhance our collective work in teaching and learning.
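To make the mechanism of “pattern recognition, mimicry and probability” concrete, here is a toy sketch — not any real AI system, and vastly simpler than a large language model — that predicts the next word purely from how often words follow one another in example text:

```python
# A toy "bigram model": count which word follows which in a training
# text, then predict the most probable continuation. Real language
# models work on the same statistical principle, at enormous scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate".split()

# Pattern recognition: tally each word's observed successors.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" -- seen twice after "the", vs. "mat" once
```

The sketch mimics its training text and nothing more — which is why creativity, critical thinking and interpretation remain human work.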

The Russell Group, a UK-based university collaborative, recently published a set of principles on AI created “to help universities ensure students and staff are ‘AI literate’ so they can capitalize on the opportunities technological breakthroughs provide for teaching and learning.” These principles recognize that mastering the use of generative AI is going to be essential for ourselves and, most importantly, for our students. Meeting our collective responsibility to those we teach means learning how to best integrate generative AI into our work and theirs.

These principles recognize both the risks and opportunities of generative AI and represent a commitment to helping staff and students become leaders in an increasingly AI-enabled world. With some minor modifications, the guiding principles are shared below:

  • Universities will support students and staff to become AI-literate.

  • Faculty and staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.

  • Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and strive for equity in access and application of these tools.

  • Universities will ensure academic excellence, integrity and ethical conduct are upheld.

  • Universities will work collaboratively to share best practice as the technology and its application in education evolve.

How can we apply these principles in the classroom? A sample of suggestions below shows AI uses, roles, and pedagogical benefits and risks identified in the research literature (adapted from Mollick & Mollick, 2023):

  • Mentor — provides feedback. Benefit: frequent feedback improves learning outcomes. Risk: feedback may contain errors.

  • Tutor — direct instruction. Benefit: personalized direct instruction. Risk: uneven knowledge base and confabulation.

  • Coach — prompts metacognition. Benefit: opportunities for reflection and regulation. Risk: incorrect advice.

  • Teammate — increases performance. Benefit: provides alternate viewpoints. Risk: confabulation and errors.

  • Student — receives explanations. Benefit: improved comprehension. Risk: confabulation and argumentation.

  • Simulator — deliberate practice. Benefit: practicing applying knowledge aids transfer. Risk: inappropriate fidelity.

  • Tool — accomplishes tasks. Benefit: improves the efficiency of task completion. Risk: outsourcing thinking rather than work.

For additional strategies and to explore suggestions for teaching strategies that support academic integrity and AI use, see “ChatGPT in the Classroom” and the Teaching Center’s ChatGPT Resources for Faculty site.

It is important to note that the Teaching Center does not endorse or support the use of any AI detection tools, including Turnitin’s tool, due to high false positive rates, as well as potential equity issues caused by a higher percentage of false positives being attributed to non-native English speakers (Liang et al., 2023).

At the Teaching Center, we acknowledge both the potential benefits and the challenges of utilizing generative AI technologies to enhance our academic work, and especially to support our teaching and learning, across the entire Pitt community. As we explore the applications of generative AI to improve the quality of teaching, we also recognize it is imperative to uphold the principles of academic integrity and ethical conduct. We understand that each instructor will approach generative AI in their classroom according to their own knowledge, skill level and comfort with this innovative technology.

We encourage all faculty to make use of the Teaching Center Resource Page for Generative AI, and to consider participating in future AI-related workshops and events as you navigate this new terrain. In this way, we address both the risks and opportunities of generative AI and make a commitment to helping faculty, staff and students become leaders in an increasingly AI-enabled world.

John G. Radzilowicz is interim director of teaching support at the University Center for Teaching and Learning, and co-chair of the ad hoc Committee on Generative AI for Research and Education under the auspices of the Office of the Provost and the Office of the Senior Vice Chancellor for Research.

REFERENCES

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7). https://doi.org/10.1016/j.patter.2023.100779

Mollick, E., & Mollick, L. (2023). Assigning AI: Seven approaches for students, with prompts. arXiv. https://doi.org/10.48550/arXiv.2306.10052