Xiaofan Yu

(she/her/hers)

University of California, San Diego

Internet of Things, Edge Computing, On-Device AI, Network Optimization

Xiaofan Yu received the B.Sc. degree from Peking University, China in 2018 and the M.Sc. degree from the University of California, San Diego in 2020. She is a Ph.D. candidate in the Department of Computer Science and Engineering at the University of California, San Diego, advised by Prof. Tajana Rosing. She also actively collaborates with multiple faculty members at UCSD and with researchers from industry labs (Arm, Intel, etc.).

She received the Best Paper Award at CNSM'21 and was nominated as a Young Woman in VLSI by the IEEE Computer Society's TCVLSI newsletter in 2021.
Her current research interests center on designing lightweight, online, on-device machine learning algorithms for unsupervised, lifelong, and federated learning tasks that adapt to complex and non-stationary environments.

Adaptive and Efficient AI at the Edge for Pervasive Internet-of-Things Deployments

Along with the rapid development of lightweight machine learning algorithms and powerful edge computing platforms (e.g., the NVIDIA Jetson Nano), Artificial Intelligence (AI) at the edge is the next wave in boosting the capability of Internet-of-Things (IoT) deployments. Though promising, research on this track must overcome multiple barriers, from the single-device level to the network level. My research addresses these barriers and closes the gap toward enabling intelligent, large-scale IoT deployments in the real world.

My first research thrust develops adaptive and robust AI on a single edge device. The essential challenge lies in two aspects: (1) the limited ability of AI to adapt to and learn robustly from complex real-world environments, and (2) the large resource and energy cost of performing on-device training. For the first aspect, my work on online self-supervised lifelong learning extracts and memorizes knowledge from single-pass, non-iid data streams without labels or prior knowledge (e.g., known task boundaries). It builds on contrastive learning with self-supervised knowledge distillation to cope with the absence of labels, and employs an online memory update to retain diverse samples; a sketch of such an update appears below. Experiments on sequential and imbalanced CIFAR-10 streams show a 6.43% kNN accuracy improvement over the best baseline. For the second aspect, our team is actively paving the way for deploying emerging computing paradigms, namely brain-inspired Hyperdimensional Computing (HDC), on edge devices as a lightweight learning method; the basic pipeline is sketched after the memory example. Being hardware-friendly and robust to noise, HDC is at least 1,000x more energy efficient than state-of-the-art neural networks at similar accuracy.
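
To make the first aspect concrete, below is a minimal Python sketch of one way to retain diverse samples from a single-pass stream under a fixed memory budget, using reservoir sampling as an illustrative stand-in; the actual update policy in the work is distribution-aware and differs in how it chooses which samples to keep. All names here are illustrative.

    import random

    class StreamMemory:
        """Fixed-size buffer retaining a subset of a single-pass stream.

        Reservoir sampling is used here as an illustrative stand-in for
        the distribution-aware update policy described in the text.
        """

        def __init__(self, capacity: int):
            self.capacity = capacity
            self.buffer = []   # stored samples
            self.seen = 0      # total samples observed so far

        def update(self, sample):
            self.seen += 1
            if len(self.buffer) < self.capacity:
                self.buffer.append(sample)
            else:
                # Overwrite a random slot with probability capacity/seen,
                # so every stream sample is kept with equal probability.
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.buffer[j] = sample

        def sample(self, k: int):
            # Draw a batch of stored samples to replay with new data.
            return random.sample(self.buffer, min(k, len(self.buffer)))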
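For the second aspect, the following sketch illustrates a basic HDC pipeline under common assumptions: random bipolar projection encoding, class prototypes formed by bundling (summation), and cosine-similarity classification. Dimensions and names are illustrative, and real HDC systems vary in their encoding schemes. The appeal for edge hardware is that training and inference reduce to additions and comparisons over low-precision vectors.

    import numpy as np

    rng = np.random.default_rng(0)

    D = 10_000        # hypervector dimensionality
    N_FEATURES = 64   # input feature dimensionality (illustrative)
    N_CLASSES = 10

    # Fixed random bipolar hypervector for each input feature.
    projection = rng.choice([-1, 1], size=(N_FEATURES, D)).astype(np.int8)

    def encode(x: np.ndarray) -> np.ndarray:
        # Weighted bundling of per-feature hypervectors, then a sign
        # threshold back to a bipolar hypervector.
        return np.sign(x @ projection)

    def train(X: np.ndarray, y: np.ndarray) -> np.ndarray:
        # Each class prototype is the bundled encoding of its samples.
        prototypes = np.zeros((N_CLASSES, D))
        for xi, yi in zip(X, y):
            prototypes[yi] += encode(xi)
        return prototypes

    def predict(x: np.ndarray, prototypes: np.ndarray) -> int:
        # Classify by cosine similarity to each class prototype.
        h = encode(x)
        sims = prototypes @ h / (np.linalg.norm(prototypes, axis=1)
                                 * np.linalg.norm(h) + 1e-9)
        return int(np.argmax(sims))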

Another thrust of my research enables efficient distributed learning over heterogeneous IoT networks. Federated Learning (FL) has emerged as a privacy-preserving scheme for distributed training at the edge. The standard FL algorithm, FedAvg, synchronously aggregates the updated models obtained from local training on edge devices. However, the strong synchrony of FedAvg hinders convergence and robustness when deployed on real-world wireless networks with unstable connections. My work Async-HFL proposes an end-to-end asynchronous FL framework for hierarchical IoT networks. Async-HFL performs asynchronous updates at each layer, together with adaptive sensor selection and sensor-gateway association at runtime, to enhance convergence under heterogeneous data and latency distributions; the aggregation step is sketched below. Async-HFL converges at least 8.65x (1.08x) faster in wall-clock time compared to state-of-the-art synchronous (asynchronous) FL algorithms (with client selection).
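
As a concrete illustration of the asynchronous aggregation step, below is a minimal Python sketch in the spirit of asynchronous FL: the server merges each client update as soon as it arrives, down-weighting stale updates rather than blocking on stragglers. The mixing rule and parameter names are illustrative assumptions, not the paper's exact rule; Async-HFL's actual aggregation, sensor selection, and sensor-gateway association logic are more involved.

    import numpy as np

    def async_aggregate(global_model: np.ndarray,
                        client_model: np.ndarray,
                        client_round: int,
                        server_round: int,
                        alpha: float = 0.6) -> np.ndarray:
        # Staleness: how many global rounds have passed since the client
        # pulled the model it trained on.
        staleness = server_round - client_round
        # Down-weight stale updates so slow sensors cannot drag the
        # global model back toward an outdated state.
        weight = alpha / (1.0 + staleness)
        return (1.0 - weight) * global_model + weight * client_model

    # Usage: merge updates as they trickle in, never waiting on stragglers.
    global_model = np.zeros(4)
    update = np.ones(4)   # model received from one sensor
    global_model = async_aggregate(global_model, update,
                                   client_round=7, server_round=10)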