By MARTY LEVINE
The response by potential research partners to the new Collaboratory Against Hate has been “overwhelming,” says co-director Kathleen Blee, with more than 300 people stepping forward to join the team.
The month-old joint Pitt-Carnegie Mellon University project, dubbed a “Research and Action Center,” aims not only to study the causes of extremist hate, online and off, and the vectors of its spread, but also to devise interventions.
“We’re still in our infancy but we’ve had a pretty fast launch here,” says Blee, dean of the Dietrich School of Arts & Sciences and sociology faculty member. The center is intent on bringing together scientists from diverse fields with community partners — children’s advocates, government agencies, non-profits — “to look at the issues of extremist hate more broadly.”
The white supremacist movement, for instance, “has a much greater reach than it had 10 years ago, through social media,” she notes, finding children and young adolescents unbidden through certain online games.
“We know very little about children who are targeted,” she adds. “That’s a kind of knotty problem that we’re interested in because it involves an unusual team of experts” required to understand and counteract it. That team might include researchers on child and adolescent development, researchers on addiction (since being sucked into radical ideas mimics other addictive behaviors, she says) and experts on law and the online community.
“That’s the kind of problem we’re interested in that isn’t being addressed by any other research environment,” Blee says.
Her CMU counterpart, Collaboratory co-director Lorrie Cranor (a professor in security and privacy technologies in CMU’s CyLab, which she also directs), says their plan is to bring together researchers in clusters to develop bigger proposals, using Collaboratory seed money to draw in other funding for larger-scale efforts.
These research clusters might include everyone from linguists, gun violence researchers and political scientists to religious studies specialists and historians — an array of experts that is surprising even to her, Cranor says. “Sadly, we can’t commit to solving all the problems of the world, but I think the understanding of the root causes (of hate) is something we need to be looking at,” she said. That also includes examining one of the problems of our time — much exacerbated by social media — that so many people believe “facts” that seem ridiculous even on their face.
“Technology does play a big role” in the spread of fake facts and hateful ideologies, Cranor says, and CMU researchers have already been studying extremist-group formation and how their ideas propagate through a network. “Having that understanding is the first step to understanding how we can stop that provocation,” she says.
At Pitt, School of Computing and Information faculty member Yu-Ru Lin has already joined the Collaboratory’s work. She is part of the Collaboratory’s steering committee, which is helping to build early research teams.
Lin uses a statistical and computational approach to study publicly available texts from such online sources as Twitter, Facebook and YouTube to understand political discourse: how online groups discuss contentious issues, such as gun control and abortion, and how hate speech and prejudice enter and evolve.
Concentrating on 2016-18, her research looks at “how do societal trends emerge from online discussions and what is the best way to ameliorate these group conflicts,” she explains. How do certain communities form their views on political issues? How do people respond to crisis events that trigger intense political talk, such as mass shootings, and how do people express their emotions?
Right after a major shooting, she says, she is able to chart how different groups “say what they believe is a cause, consequence and solution to a shooting.” Her research shows how some groups, “right after the shooting, were able to take others’ perspectives and they were able to look at middle ground solutions,” such as legislation.
She is using artificial intelligence — machine learning — to look at “language cues that will help researchers interpret the language contexts … and political beliefs,” with the intention of designing “an intervention against hate” — a tool for researchers to use as a kind of behavior coach within the group.
Of course, she cautions, researchers must be certain that such an AI-assisted tool would always have a positive impact. “We don’t want people to think we are creating lots of bots to influence people.” After all, such an effort by the Russians and others, prior to the 2016 election, helped get us into this current mess in the first place.
Adds Blee: the field of extremist views “is so enormous that we need more suitable interventions. Right now we’re at the building stage. The time is right to do this. There is a lot of public interest in changing things — there is public will to enhance the interventions. The interventions are just not fully worked out.
“Part of the optimism of the center,” she says, “is that these are not inevitable patterns. Hate and extremism are socially created patterns. We can reverse them. We are not inevitably caught in an historical problem that we cannot control.”
Marty Levine is a staff writer for the University Times. Reach him at firstname.lastname@example.org or 412-758-4859.