Hosted by the “Designing Responsible AI Technologies to Protect Information Integrity” research team of The University of Texas at Austin’s Good Systems initiative, this virtual event brings together researchers and thought leaders working across disciplines and sectors on AI tools for use by journalists, fact-checkers, and researchers. The event will feature three keynote talks and a panel discussion covering a wide range of topics, and viewers are encouraged to participate in the interactive Q&A following each session.
This event is free and open to the public. Please register using the link below to receive the Zoom link, which will be sent the day before the event.
Speakers and Panelists
Andreas Vlachos, University of Cambridge
David Beaver, The University of Texas at Austin
Jason Stanley, Yale University
Alice Marwick, University of North Carolina at Chapel Hill
Schedule
The workshop will take place May 2, 2024, 9:00am-12:00pm US Central Time.
Time | Event
---|---
9:00am-9:10am | Welcome
9:10am-9:45am | Invited talk 1: Andreas Vlachos, "Fact-checking as a conversation" [slides]
9:45am-10:20am | Invited talk 2: David Beaver & Jason Stanley, "Fighting Disinformation: Another Noble Experiment?" [slides]
10:20am-10:30am | Break
10:30am-11:05am | Invited talk 3: Alice Marwick, "Down the Rabbit Hole? Disinformation and Far-Right Online Radicalization"
11:05am-12:00pm | Panel discussion featuring the invited speakers
Speaker Bios and Abstracts
Andreas Vlachos: Fact-checking as a conversation
Abstract: Misinformation is considered one of the major challenges of our time, prompting numerous efforts to counter it. Fact-checking, the task of assessing whether a claim is true or false, is considered key to reducing its impact. In the first part of this talk I will present our recent and ongoing work on automating this task using natural language processing, moving beyond simply classifying claims as true or false in the following respects: incorporating tabular information, performing neurosymbolic inference, and using a search engine as a source of evidence. In the second part of the talk, I will present an alternative approach to combating misinformation via dialogue agents, and present results on how internet users engage in constructive disagreements and problem-solving deliberation.
Bio: Andreas Vlachos is a professor of Natural Language Processing and Machine Learning in the Department of Computer Science and Technology at the University of Cambridge and a Dinesh Dhamija fellow of Fitzwilliam College. His current projects include dialogue modelling, automated fact-checking, and imitation learning. Vlachos has also worked on semantic parsing, natural language generation and summarization, language modelling, information extraction, active learning, clustering, and biomedical text mining. His research team is supported by grants from the ERC, EPSRC, ESRC, Facebook, Amazon, Google, Huawei, the Alan Turing Institute, and the Isaac Newton Trust.
David Beaver & Jason Stanley: Fighting Disinformation: Another Noble Experiment?
Abstract: We discuss a web of connected assumptions underlying the contemporary focus on the anti-democratic effects of disinformation, and the use of technology to fight it. The issues are philosophical, psychological, political, and practical. To turn to metaphor to examine our assumptions: is disinformation to be thought of as a disease that is spreading, one that can be cured by careful message sterilization to stop its spread? Or is disinformation the result of an underlying addiction? If so, who are the addicts, and how can they be helped? Does the fight against disinformation unduly privilege truth above emotion and social connection? Are the real problems best tackled at the level of the purity of individual messages, or at the level of the collective well-being of social groups? What types of communicative bias can be controlled (if control is what we should be doing) by policing individual message purity? Here we note a wealth of literature in psychology and political science suggesting that systematically biased messaging does not require the communication of false facts at all. The implications would seem to be dire: would a world where disinformation-fighting machines prevented any untruth from flashing in front of us really be a better world to live in?
Bio: David Beaver and Jason Stanley are the authors of The Politics of Language (Princeton University Press, 2023).
Beaver is professor of linguistics and philosophy at the University of Texas at Austin and director of the UT Cognitive Science Program. His books include Presupposition and Assertion in Dynamic Semantics and Sense and Sensitivity: How Focus Determines Meaning.
Stanley is the Jacob Urowsky Professor of Philosophy at Yale University. He is the author of How Fascism Works: The Politics of Us and Them and How Propaganda Works (Princeton), among other books.
Alice Marwick: Down the Rabbit Hole? Disinformation and Far-Right Online Radicalization
Abstract: The popular image of far-right radicalization involves a young man visiting shadowy websites that “radicalize” him into extremism and, in the worst cases, cause him to commit political violence. But a half-century of research into the effects of media shows that people’s beliefs rarely change from immediate exposure to media, be it television, books, or YouTube videos. This talk explores the relationship between far-right disinformation and belief in extremist ideas. I discuss what makes people susceptible to far-right, fringe, and conspiratorial disinformation, how they learn to see disinformation as “evidence,” and how we can begin to tackle this problem.
Bio: Alice E. Marwick (PhD, New York University) is the Microsoft Visiting Professor at the Center for Information Technology Policy at Princeton University, an associate professor in the Department of Communication at the University of North Carolina at Chapel Hill, and principal researcher at its Center for Information, Technology, and Public Life, which she co-founded. She researches the social, political, and cultural implications of popular social media technologies and runs the Disinformation in Context (DISC) Lab at CITAP, which uses ethnographic, qualitative, and computational methods to investigate communities in which disinformation is created and spread. Marwick’s most recent book, The Private is Political (Yale 2023), examines how the networked nature of online privacy disproportionately impacts marginalized individuals in terms of gender, race, sexuality, and socio-economic status.
Organizers
The University of Texas at Austin
This event is organized by Designing Responsible AI Technologies to Curb Disinformation, a Good Systems project.