Navigating Cultural Adaptation of LLMs: Knowledge, Context, and Consistency: Dr Vered Shwartz (University of British Columbia)
| An Institute for Data Science and Artificial Intelligence seminar | |
| --- | --- |
| Date | 12 December 2024 |
| Time | 16:00 to 17:00 |
| Place | Zoom |
Event details
Speaker: Dr Vered Shwartz (University of British Columbia)
Abstract: Despite their remarkable success, large language models and vision-and-language models suffer from several limitations. This talk focuses on one of them: the models' narrow Western, North American, or even US-centric lens, a consequence of training on web text and images primarily from US-based users. As a result, users from diverse cultures who interact with these tools may feel misunderstood and find them less useful. Worse still, when such models are used in applications that make decisions about people's lives, a lack of cultural awareness may lead them to perpetuate stereotypes and reinforce societal inequalities. In this talk, I will present a line of work from our lab aimed at quantifying and mitigating this bias.
Speaker's short bio: Vered Shwartz is an Assistant Professor of Computer Science at the University of British Columbia, and a CIFAR AI Chair at the Vector Institute. Her research interests include commonsense reasoning, computational semantics and pragmatics, multimodal models, and cultural considerations in NLP. Previously, Vered was a postdoctoral researcher at the Allen Institute for AI (AI2) and the University of Washington, and received her PhD in Computer Science from Bar-Ilan University.
Zoom meeting link:
https://Universityofexeter.zoom.us/j/93707609239?pwd=ErfOgIy30fwkAH7V5iFFVgA0EC86QU.1
(Meeting ID: 937 0760 9239, Password: 259613).