Sharon Levy

(she/her/hers)

University of California, Santa Barbara

Natural Language Processing, Responsible AI, Natural Language Generation

Sharon is a 5th-year Ph.D. candidate at the University of California, Santa Barbara, where she is advised by William Wang. Her research interests lie in natural language processing, with a focus on Responsible AI. Her work has been published in ACL, EMNLP, WWW, and LREC. She has spent summers interning at AWS AI, Meta, and Pinterest, and is a recipient of the Amazon Alexa AI Fellowship. Throughout her Ph.D., she has mentored several undergraduate students and was awarded the CS Outstanding Teaching Assistant Award.

Responsible AI via Responsible Large Language Models

While large language models have advanced the state-of-the-art in natural language processing, they are trained on large-scale datasets that include harmful information. Studies have shown that, as a result, these models exhibit social biases and generate misinformation. My research studies these risks by analyzing and interpreting large language models across several aspects of Responsible AI: fairness, trustworthiness, and safety. Within fairness, I have analyzed dialect bias in generated text when models are prompted with African American Vernacular English (AAVE) versus Standard American English (SAE). I have investigated model trustworthiness through the memorization and subsequent generation of misinformation in the context of conspiracy theories. Finally, my research in AI safety aims to identify generated text that may lead to physical harm. Overall, my research develops more principled methods for discovering harmful behavior in NLP models so that these models can be used more safely and effectively in the real world.