Senate Plenary tackles good, bad and middle ground of ChatGPT, generative AI


By SHANNON O. WELLS

When Joseph Yun, Pitt’s artificial intelligence and innovation architect, searched online for the source of the words and phrases the latest version of ChatGPT draws from, he found the results of the internet-wide “Common Crawl,” well, rather eye-opening.

“What is Common Crawl? ‘This (ChatGPT) corpus contains petabytes of data collected over eight years of web crawling. This corpus contains raw web page data, metadata, metadata extracts and text extracts with light filtering,’” he read from the product description. “You know what that means, in computer science? No filtering,” Yun said to a roar of audience laughter. “So, it’s just eight years of the publicly scrapable web.”

SCIENCE REVEALED

"ChatGPT Wrote this Title: Exploring the Impact of AI on Our Minds and Society"
7 p.m. April 19, online

A few months ago, a seemingly revolutionary technological milestone occurred when ChatGPT was released online. ChatGPT and other instances of what are called "generative artificial intelligence" are programs that actually generate text, images, audio or other content for users. There has been much debate about whether these programs pose a threat or offer an opportunity for our education system. This Science Revealed lecture from the Dietrich School of Arts & Sciences will present the research-informed ideas of several Pitt experts on the broader topics of how this technology may affect the way we think, write and speak, and the associated impacts on society. RSVP by April 17 to Jason Irwin with your name and email address to get login instructions. Information about the panelists can be found at as.pitt.edu/sciencerevealed.

“OK, it gets worse,” he continued, describing how OpenWebText2, a web-text training corpus used by similar generative AI models, draws its content from links in Reddit posts that received at least three upvotes. “So, if you got three or more upvotes, you’re in (Chat)GPT-3. Just imagine the toxicity, the … just horrible content that exists on Reddit … because the world is a crazy place, and all of that has been smashed into vector space as (generative AI software content), and now it’s just math.”
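
A minimal sketch can make the filter Yun describes concrete: an OpenWebText-style corpus keeps links only from Reddit posts that cleared the upvote threshold, with no judgment about quality or toxicity. The posts below are invented stand-ins; an actual corpus build would pull submissions from a Reddit data dump and then scrape the linked pages.

```python
# Sketch of an OpenWebText-style upvote filter. The threshold is the
# one Yun cites; the posts are made-up examples, not real data.
MIN_UPVOTES = 3

posts = [
    {"url": "https://example.com/a", "upvotes": 5},
    {"url": "https://example.com/b", "upvotes": 1},  # filtered out
    {"url": "https://example.com/c", "upvotes": 3},
]

# Keep every link that clears the threshold -- nothing about the
# content itself is checked, which is exactly Yun's point.
corpus_urls = [p["url"] for p in posts if p["upvotes"] >= MIN_UPVOTES]
print(corpus_urls)  # ['https://example.com/a', 'https://example.com/c']
```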

However, “most people don’t understand that this is how it works,” he added. “We’re just kind of like, ‘Oh, it’s just this brain.’ No, it’s this body of words, in which basically an advanced form of linear algebra is (now placed) on top of it. So, words matter. Will we choose our words wisely?”
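
The “advanced form of linear algebra” Yun mentions can be pictured with a toy example: each word becomes a vector of numbers, and relatedness is computed with dot products. The three-dimensional vectors below are invented for illustration; real models embed words in hundreds or thousands of dimensions.

```python
import numpy as np

# Toy picture of words "smashed into vector space": each word is a
# vector, and similarity is just linear algebra (cosine similarity).
# These 3-D vectors are invented; real embeddings are far larger.
embeddings = {
    "king":  np.array([0.8, 0.1, 0.6]),
    "queen": np.array([0.7, 0.2, 0.7]),
    "apple": np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["apple"]))  # lower: unrelated words
```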

Yun was among the University panelists who shared insights on the theme, “Unsettled: Frames for Examining Generative Artificial Intelligence” at Pitt’s 2023 Senate Plenary held April 4 in the William Pitt Union Assembly Room.

In addition to Yun, the panel included Colin Allen, distinguished professor of philosophy of science; Morgan Frank, assistant professor, School of Computing and Information; Na-Rae Han, linguistics teaching professor; Alison Langmead, clinical professor of History of Art & Architecture and director of the Visual Media Workshop; Michael Madison, professor of law; and Annette Vee, associate professor of English and director of Pitt’s Composition Program. Senate Council Vice President Kris Kanthak served as host and moderator.

An annual event open to the entire Pitt community, the plenary drew about 30 attendees to the Assembly Room, including Chancellor Patrick Gallagher and Provost Ann Cudd. Another 57 participated remotely via webinar.

Very fancy horse and carriage

Colin Allen, who works in the Department of History and Philosophy of Science on the philosophy of cognitive science, especially issues related to animal cognition and artificial intelligence, acknowledged that, while generative AI is disruptive, “we’ve had many disruptive technologies in the past. The automobile has led to 30 percent of our urban area being paved, which is a sort of radical shift in the way that we live our lives in cities at least, and of course, the distances that we can travel in a relatively short amount of time.”

The hype around AI technologies like ChatGPT, he said, ranges from, “‘We’re on the brink of human-level intelligence or even superhuman intelligence,’ to the naysayers countering the hype, saying, ‘Oh, it’s just a very fancy kind of autocomplete. …’

“I want to take a position somewhere in the middle of that. It’s not just autocomplete (predictive texting), or at least it’s autocomplete like the automobile is just a very fancy horse and carriage. It has a lot more capabilities. And we should be a little bit clearer about the technologies that we’re talking about here and what their capacities are.”

Generative AI refers to a particular class of AI models that Allen explained are “typically implemented in artificial neural networks that do pattern completion. So you can give them some input that’s partial … called a prompt, (that) could be something that’s related to words. It could be a partial image, or it could be even a description of an image, and it will complete that task in a way that generates even more text or an appropriate image.”
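
A hands-on version of that pattern completion can be run with the small open-source GPT-2 model through the Hugging Face transformers library, an assumption made here purely for illustration; ChatGPT itself is vastly larger and sits behind OpenAI’s API.

```python
# Prompt-driven pattern completion with a small open model, as a
# stand-in for the much larger systems Allen is describing.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI matters for universities because"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])  # the prompt plus a generated continuation
```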

This differs from other AI applications whose main function is classifying, such as in medical contexts where an artificial neural network can take images “from radiology, for instance, and tell you whether or not — or at least give you its best guess — there was a cancerous tumor in that image. So these are based on somewhat similar technologies.”
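
The contrast can be sketched in a few lines: a classifier maps an input to a label, plus a best-guess probability, instead of generating new content. The “image features” and labels below are toy numbers standing in for the radiology example, not medical data.

```python
# A classifier, unlike a generator, returns a label and a confidence.
from sklearn.linear_model import LogisticRegression

X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]  # pretend image features
y = [0, 1, 0, 1]                                      # 0 = benign, 1 = tumor

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.85, 0.75]]))        # best-guess label: [1]
print(clf.predict_proba([[0.85, 0.75]]))  # the model's confidence in that guess
```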

With ChatGPT, whose last three letters stand for “generative pre-trained transformer,” Allen said the basic idea is that, rather than just doing a simple “pattern completion” of everything that’s been put in so far to predict what comes next, “it’s actually looking at some temporal or distal structure in that input.

“So it’s focusing on specific parts of that input in order to predict what the next thing is that it’s going to produce,” he noted.
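
The “focusing” Allen describes is the transformer’s attention mechanism. Here is a minimal numpy sketch, with random matrices standing in for learned projections of a four-token input:

```python
import numpy as np

# Scaled dot-product attention: each position scores every other
# position (queries against keys), softmaxes the scores into weights,
# and takes a weighted mix of the values.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # queries: one row per token
K = rng.normal(size=(4, 8))  # keys
V = rng.normal(size=(4, 8))  # values

scores = Q @ K.T / np.sqrt(K.shape[1])         # pairwise relevance scores
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row sums to 1
output = weights @ V                           # attention-weighted mix of values
print(weights.round(2))  # which positions each token "focuses" on
```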

These capabilities worry educators, and while Allen doesn’t “want to sort of downplay” generative AI’s accomplishments, “it has some real limitations that are exposed if you know how to play with it in ways that produce the kinds of errors that are symptomatic of the limitations of this approach.”

Despite all the hype, “we’re going to have no real functional AI that can do everything that humans can do within the next few years,” he said, adding there are certain things GPT-4, the fourth iteration of the GPT model underlying ChatGPT, “still can’t do, and I actually think that there are sort of foundational reasons in the technology that will put certain kinds of limits on it.”

Allen embraced ChatGPT on the first day of his undergraduate class this semester. Presenting his students with an argument he composed alongside one generated by ChatGPT, he told the class, “Let’s look at the differences between them and compare and contrast. Let’s actually take a critical approach to this. Let’s not just be wowed by what a nice job it actually did on the output it gave. Let’s compare it to what we would ideally want for an A or an A-plus answer rather than the sort of average B, B-minus answer that it gives on the particular prompt that I gave it.”

“I think we can incorporate this into our teaching modes quite readily,” he added, “if we train what we’re supposed to be training in a university such as this, which is a critical attitude towards things.”

Living in a meat sack

Alison Langmead began her presentation with the novel suggestion that everyone listening “make a commitment today to stop using the phrase ‘artificial intelligence,’ or ‘AI,’ in our speech and in our writing.”

Humans, she elaborated, are “suggestible animals, and our use of language shapes our minds. And in the words of a former teacher of mine, … the phrase artificial intelligence can be considered a form of dangerous nonsense.” Langmead said she means this not “as a truth claim, but as a move toward framing this conversation in terms of a socio-technical understanding” of the effect the term artificial intelligence has “on our humanity.”

“All we know of what it means to be intelligent comes from living in this meat sack in empathetic community with the other meat sacks who respond in kind,” Langmead said. “One cannot make an artificial version of our full-blooded intelligence as we experience it. The intelligence we can know and describe necessarily comes attached to this particular instantiated bodily engagement in the world. So rather than (artificial intelligence), what we’re talking about here would be creating another intelligence.”

The choice, she said, rests with people rather than machines: “it will be on us … who we choose to accept as an ... intelligence within our global society. It is on us to allow a silicon-based agent to have the power to make fully independent decisions about our lives. It is in our power to do it now, if we like.”

The dance of opportunity

Morgan Frank’s role in the School of Computing and Information puts him at the forefront of researching the complexity of AI, as well as the future of “work” and the socio-economic consequences of technological change. At the plenary, he said his research delves into whether exposure to technology and its impact on labor can be measured, and what some possible outcomes or symptoms might be “if technology exposure is at play.”

“I’m not alone in researching this. There’s now a really large collection of literature trying to estimate which workers and occupations are most exposed to technology,” he said, citing a 2013 Oxford University study that assigned a probability of computerization to the roughly 700 job titles for which the U.S. Bureau of Labor Statistics reports data. It concluded, alarmingly, that 47 percent of U.S. employment is at high risk of computerization.

“Now, this was an early heavy hitter in this body of research,” Frank said. “There have since been many studies that do something similar, and although there are differences among the studies, they’re all motivated by this core economic theory called skill-biased technological change. But part of the assumption of this framework shared among all these studies is that creativity is not automatable.”

Frank explained that the concept of “skill-biased technological change” suggests that “high-skilled cognitive workers are made more productive by cognitive technologies.”

Therefore, occupations such as software developer and university professor are more secure because of machine learning. Low-skilled and physical laborers, however, are more likely to be substituted and lose wage or employment opportunities because of robotics and related technologies.

“So a truck driver is not facing a rosy future, as autonomous vehicles become more mature,” Frank said, showing a video of a Boston Dynamics robot carrying and stacking boxes in a warehouse. “Now imagine that this technology becomes a practical solution for warehouse settings. What do you think this would mean for employment in warehouses?”

Another video juxtaposed clearly skilled dancers with unskilled yet enthusiastic ones. “Here, we have an artisanal good produced by a high-skilled worker. Here, we have two unskilled workers,” he said to audience laughter. “But using some technology, we can produce dance, art, this artisanal good, using these unskilled workers. And unlike in the warehouse setting, imagine this became a practical way to produce dance. I think it would depress wages and employment for the expensive, high-skilled ballerina, but create new employment and new wage opportunities for the low-skilled workers who can adapt to work with the technology.

“This is the situation we’re in now with generative AI tools,” he added. “Which workers can adapt and work with the technology?”

The worst-case scenario is that generative AI technology “automates” workers out of their jobs entirely, leading to unemployment. But, Frank said, “more likely, technology reshapes the skills that are required by existing occupations. Workers who can adapt will work with those technologies and move forward happily. Workers who cannot might (end up in) job separations because they either quit or were fired because of their failure to adapt.”

What is writing for?

As director of the Composition Program, Annette Vee works with colleagues to serve more than 7,000 undergraduate students a year, alongside the Writing Institute and Writing Center, which support writing across campus. Composing and writing are taught as creative disciplines based on generating, and explaining, ideas through investigative, critical inquiry, she said, and on exploring one’s “experiences, thoughts and observations.”

Making “productive use of uncertainty” runs counter to what large language models such as ChatGPT represent, and even to what the technology says explicitly about the value of writing, “which is to explain, to argue, to persuade,” Vee said. “ChatGPT is infamously never uncertain. It responds with confidence if it’s right and even if it’s obviously, tragically wrong.

“More importantly, it has no relationship to what it means to be uncertain, to inquire, to examine its own experiences,” she added. “It has no stakes in what it writes.”

Vee warned against a future possibility that the Pitt administration would call for eliminating the Writing Center, “because ChatGPT irons out student prose just fine, when a budget model suggests that we could save thousands by raising course caps and eliminating sections of first-year writing because students don’t really need to learn how to write anymore,” she said. “We must resist that.”

Posing the question “What is writing for?” Vee said she’s hopeful the powers that be “will remember it’s the faculty in the community and human engagement that keeps students here and keeps us here as faculty as well.”

While acknowledging that faculty need to integrate generative AI into their writing curriculum, “we can still do so with productive uncertainty,” she said. “So, what is writing for? GPT answers that it’s for informing, persuading, expressing, reporting or entertaining. It doesn’t say that it’s for learning or inquiry or growth or belonging or productive uncertainty or the pleasure of wrestling with difficult ideas. Large language models like ChatGPT will not produce the challenging, thoughtful, innovative humans that Pitt faculty help to nurture now — our students.”

Senate Council President Robin Kear, who helped plan and promote the plenary and participated remotely, said she thought the event “went very well.”

“The speakers complemented each other’s expertise. It was a great mix of disciplinary viewpoints. Alison Langmead asked us to rethink the words that we use around this technology, which was an unexpected take,” she said. “Joe Yun’s description of the technology deepened my understanding. Each viewpoint brought something new and unexpected, and it was somehow all reassuring that we can be prepared, if we act thoughtfully, for the changes generative AI will bring.”

Shannon O. Wells is a writer for the University Times. Reach him at shannonw@pitt.edu.

 
