SRI Seminar Series: Saadia Gabriel
Overview
Our weekly SRI Seminar Series welcomes Saadia Gabriel, an assistant professor of computer science at the UCLA Samueli School of Engineering and an affiliated faculty member of the Bunche Center for African-American Studies. Gabriel’s research sits at the intersection of language, safety, and social impact: she investigates how to measure factuality, intent, and potential harm in human-written text, and how these insights can inform more responsible AI systems.
Moderator: Avery Slater, Department of English and Drama
Talk title: “Human in the machine: Towards community-grounded AI reasoning”
Abstract:
Large language models (LLMs) like ChatGPT are increasingly determining the course of our everyday lives. They can decide what content we are likely to see on social media. They are already being deployed in high-stakes settings, e.g., as mental health support tools. So what happens when LLMs break? What are the risks posed by LLM behavior that is misaligned with human expectations and norms? How do we mitigate the negative effects of these failures on society and prevent discriminatory decision-making?
In this talk, I discuss the growing disconnect between scalability and safety in LLMs. I walk through three recent studies that highlight the need for a community-grounded approach that bridges the gap between AI systems and the users who interact with them. First, I describe work from the UCLA Misinformation, AI & Responsible Society (MARS) lab exploring how AI agents can change the beliefs of cognitively biased users. Next, I discuss limitations in replicating human evaluation with AI agents. Lastly, I present simple and effective strategies that improve the robustness of AI systems by aligning them with users’ individual and community perspectives.
Suggested reading:
- Genglin Liu, Vivian Le, Salman Rahman, Elisa Kreiss, Marzyeh Ghassemi, Saadia Gabriel, “MOSAIC: Modeling Social AI for Content Dissemination and Regulation in Multi-Agent Simulations,” arXiv pre-print, October 26, 2025.
- Salman Rahman, Sheriff Issaka, Ashima Suvarna, Genglin Liu, James Shiffer, Jaeyoung Lee, Md Rizwan Parvez, Hamid Palangi, Shi Feng, Nanyun Peng, Yejin Choi, Julian Michael, Liwei Jiang, Saadia Gabriel, “AI Debate Aids Assessment of Controversial Claims,” arXiv pre-print, October 29, 2025.
- Ashima Suvarna, Christina Chance, Karolina Naranjo, Hamid Palangi, Sophie Hao, Thomas Hartvigsen, Saadia Gabriel, “ModelCitizens: Representing Community Voices in Online Safety,” arXiv pre-print, July 9, 2025.
Speaker biography:
Saadia Gabriel is a computer scientist whose work sits at the intersection of language, safety, and social impact. She is an assistant professor at the UCLA Samueli School of Engineering and an affiliated faculty member of the Bunche Center for African-American Studies, where she studies how human-written text conveys factuality, intent, and potential harm—and how these signals can inform the development of more responsible AI systems. As the founder of UCLA’s Misinformation, AI and Responsible Society (MARS) Lab, she leads research that advances methods for understanding and mitigating the real-world consequences of language technologies.
Gabriel’s scholarship is grounded in the belief that language models shape societal outcomes in ways that reflect underlying choices—technical, cultural, and institutional. Her work examines how to design and evaluate AI systems that can better identify misleading or harmful content, support more equitable information ecosystems, and strengthen public resilience to misinformation. Trained at the University of Washington’s Paul G. Allen School and with postdoctoral appointments at MIT CSAIL and NYU, she brings a multi-disciplinary lens to questions of safety, intent, and accountability in AI. Across her roles, she is driven by a core commitment: to ensure that advances in language technology contribute to a more informed, trustworthy, and socially grounded digital world.
About the SRI Seminar Series
The SRI Seminar Series brings together the Schwartz Reisman community and beyond for a robust exchange of ideas that advance scholarship at the intersection of technology and society. Seminars are led by a leading or emerging scholar and feature extensive discussion.
About the Schwartz Reisman Institute for Technology and Society
The Schwartz Reisman Institute for Technology and Society is a research institute at the University of Toronto that explores the ethical and societal implications of technology. Our mission is to deepen knowledge of technologies, societies, and humanity by integrating research across traditional boundaries to build human-centred solutions.
Explore each session in advance by visiting SRI’s Events page.
Missed an event? Visit SRI’s YouTube channel to watch previous seminars.
Good to know
Highlights
- 1 hour 30 minutes
- Online
Location
Online event
Organized by
Schwartz Reisman Institute