Abhilasha Ravichander

Carnegie Mellon University

Natural Language Processing, Natural Language Understanding, Model Analysis and Evaluation, Robustness, Interpretability

Abhilasha is a Ph.D. candidate at the Language Technologies Institute, Carnegie Mellon University. Her research focuses on understanding neural model performance, with the goal of facilitating more robust and trustworthy NLP technologies. In the past, she interned at the Allen Institute for AI and Microsoft Research, where she worked on understanding how deep learning models process challenging semantic phenomena in natural language. Her work received the "Area Chair Favorite Paper" award at COLING 2018, and she was selected as a "Rising Star in Data Science" by the University of Chicago Rising Stars workshop. She also served as co-chair of the socio-cultural inclusion committee for NAACL 2022, and co-organizes the 'NLP With Friends' seminar series.

Developing User-Centric Models for Question Answering

Everyday users now benefit from powerful Question-Answering (QA) systems in a range of consumer-facing applications. Voice assistants such as Amazon Alexa and Google Home have brought these technologies into millions of homes globally. Yet, even with millions of users interacting with them daily, surprisingly little research attention has been devoted to the issues that arise when people use QA systems. Traditional QA evaluations do not reflect the needs of many users who stand to benefit from QA technologies. For example, users with a range of visual and motor impairments prefer the option of interacting with voice interfaces for efficient text entry. With these needs in mind, we construct evaluations that account for the interfaces through which users interact with QA systems. We analyze and mitigate errors introduced by three interface types that could be connected to a QA engine: speech recognizers converting spoken queries to text, keyboards used to type queries into the system, and translation systems processing queries in other languages. Our experiments and insights aim to provide a useful starting point for both practitioners and researchers in developing usable question-answering systems.