Andreea Bobu

(she/her/hers)

University of California, Berkeley

human-robot interaction, robot learning, human-centered representation learning, learning from human input

Andreea Bobu is a Ph.D. candidate in the Electrical Engineering and Computer Science Department at the University of California, Berkeley, advised by Professor Anca Dragan. Her research focuses on aligning robot and human representations for more seamless interaction between the two. In particular, Andreea studies how robots can learn more efficiently from human feedback by explicitly focusing on learning good intermediate, human-guided representations before using them for task learning. Prior to her Ph.D., she earned her Bachelor's degree in Computer Science and Engineering from MIT in 2017. She is a recipient of the Apple AI/ML Ph.D. Fellowship, an R:SS and HRI Pioneer, and a winner of the Best Paper Award at HRI 2020, and she has worked at NVIDIA Research.

Aligning Robot Representations with Humans

Robots deployed in the real world will interact with many different humans to perform many different tasks over their lifetimes, making it difficult (perhaps even impossible) for designers to specify every aspect that might matter ahead of time. Instead, robots can extract these aspects implicitly as they learn to perform new tasks from their users' input. The challenge is that this often yields representations that pick up on spurious correlations in the data and fail to capture what the human considers important for the task, resulting in behaviors that do not generalize to new scenarios. Consequently, the human's representation, or abstraction, of the tasks they hope the robot will perform may be misaligned with the robot's. In my work, I explore ways in which robots can align their representations with those of the humans they interact with so that they can learn more effectively from human input. My core idea is to divide and conquer the robot learning problem: explicitly focus human input on teaching robots good representations before using those representations to learn downstream tasks. In my thesis, I accomplish this by investigating how robots can reason about the uncertainty in their current representation, explicitly query humans for representation-specific feedback to improve it, and then use task-specific input to learn behaviors on top of the new representation.
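To make the divide-and-conquer idea concrete, here is a minimal, purely illustrative Python sketch, not drawn from the speaker's actual methods: ensemble disagreement stands in for representation uncertainty, a simulated oracle stands in for representation-specific human feedback, and a task reward is fit only after the representation has been learned. All names, thresholds, and the linear setup are assumptions made for illustration.

```python
# Toy sketch of the pipeline in the abstract (hypothetical, not the speaker's method):
# 1) estimate uncertainty in the current representation via ensemble disagreement,
# 2) query a (simulated) human for representation-specific feedback when uncertain,
# 3) only then fit a downstream task reward on top of the learned feature.
import numpy as np

rng = np.random.default_rng(0)
D = 5                                    # raw state dimension
true_w = rng.normal(size=D)              # hidden "human" feature the robot must learn

def human_feature_label(x):
    # Stand-in for representation-specific human feedback (e.g., a feature query).
    return true_w @ x

# Ensemble of linear feature models; their disagreement proxies uncertainty.
ensemble = [rng.normal(size=D) for _ in range(5)]
X_fb, y_fb = [], []                      # accumulated human feedback

for _ in range(200):
    x = rng.normal(size=D)
    preds = [w @ x for w in ensemble]
    if np.std(preds) > 0.5:              # uncertain representation: ask the human
        X_fb.append(x)
        y_fb.append(human_feature_label(x))
        A, b = np.array(X_fb), np.array(y_fb)
        # Refit each ensemble member on a bootstrap of the feedback so far.
        for i in range(len(ensemble)):
            idx = rng.integers(0, len(b), size=len(b))
            ensemble[i], *_ = np.linalg.lstsq(A[idx], b[idx], rcond=None)

phi = np.mean(ensemble, axis=0)          # learned feature (the representation)

# Downstream task learning: fit a 1-D reward on the learned feature
# from task-specific labels, rather than on the raw state.
X_task = rng.normal(size=(100, D))
r_task = 2.0 * (X_task @ true_w) + rng.normal(scale=0.1, size=100)
f = X_task @ phi
slope = (f @ r_task) / (f @ f)           # least-squares reward weight on the feature
align = abs(phi @ true_w) / (np.linalg.norm(phi) * np.linalg.norm(true_w))
print(f"feature alignment with human feature: {align:.2f}")
print(f"reward weight on learned feature: {slope:.2f}")
```

In this sketch, the representation-specific queries (feature labels) and the task-specific labels (rewards) are deliberately kept separate, mirroring the abstract's point that human input can first be spent on the representation and only afterwards on the task learned on top of it.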