Weiyang Liu
University of Cambridge
Max Planck Institute for Intelligent Systems
I conduct research at Cambridge and MPI Tübingen with Adrian Weller and Bernhard Schölkopf. Previously, I spent wonderful years at Georgia Tech. I have also spent time at Google Brain, Nvidia, and MERL.
I work on principled modeling of inductive bias in machine learning. My research seeks to understand how inductive bias determines generalization, and to develop "light-yet-sweet" generalizable models: (i) light: conceptually simple in methodology and easy to implement in practice, (ii) sweet: having clear intuitions and non-trivial theoretical guarantees.
Over the years, I have always found myself fascinated by geometric invariance, symmetry, and structure (graphs, causality), and by how they can benefit generalization. More recently, I have become very passionate about foundation models (how to simulate human-level intelligence) and 3D/4D generative modeling (how to recreate and simulate the physical world).
I believe in two principles in my research: (i) insight must precede application, and (ii) everything should be made as simple as possible, but not simpler. I try to follow certain research values.
I am on the academic job market this upcoming year. Feel free to reach out if there is a good fit!
I take great pleasure in (co-)mentoring a few talented and highly motivated students. Mentoring and working with junior students is truly a privilege, and I always learn from and am inspired by them. I am fortunate to work with (in alphabetical order):
- Zhen Liu (PhD student at University of Montreal)
- Zeju Qiu (Master's student at Technical University of Munich)
- Longhui Yu (Master's student at Peking University)
Controlling Text-to-Image Diffusion by Orthogonal Finetuning
Zeju Qiu*, Weiyang Liu*, Haiwen Feng, Yuxuan Xue, Yao Feng, Zhen Liu, Dan Zhang, Adrian Weller, Bernhard Schölkopf
NeurIPS 2023