Weiyang Liu
University of Cambridge
Max Planck Institute for Intelligent Systems
I conduct research at Cambridge and MPI Tübingen with Adrian Weller and Bernhard Schölkopf. Previously, I spent wonderful years at Georgia Tech. I have also spent time at Google Brain, Nvidia, and MERL.
I work on principled modeling of inductive bias in machine learning. My research seeks to understand how inductive bias determines generalization, and to develop "light-yet-sweet" generalizable models: (i) light: conceptually simple in methodology and easy to implement in practice, (ii) sweet: having clear intuitions and non-trivial theoretical guarantees.
Over the years, I have always found myself fascinated by geometric invariance, symmetry, and structure (graphs, causality), and by how they can serve as guiding principles for generalization. More recently, I have become very passionate about foundation models (how to simulate human-level intelligence) and 3D/4D generative modeling (how to recreate and simulate the physical world).
I always believe in two principles in my research: (i) insight must precede application, and (ii) everything should be made as simple as possible, but not simpler. I try to follow certain research values.
I am on the academic job market this upcoming year. Feel free to reach out if there is a good fit!
I take great pleasure in (co-)mentoring a few talented and highly motivated students. Mentoring and working with junior students is truly a privilege, and I always learn from and am inspired by them. I am fortunate to work with (in alphabetical order):
- Zhen Liu (PhD student at University of Montreal)
- Zeju Qiu (Master's student at Technical University of Munich)
- Longhui Yu (Master's student at Peking University)
GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs
Gege Gao, Weiyang Liu*, Anpei Chen, Andreas Geiger, Bernhard Schölkopf
Preprint 2023
Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization
Weiyang Liu*, Zeju Qiu*, Yao Feng**, Yuliang Xiu**, Yuxuan Xue**, Longhui Yu**, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf
Preprint 2023
Ghost on the Shell: An Expressive Representation of General 3D Shapes
Zhen Liu, Yao Feng, Yuliang Xiu, Weiyang Liu*, Liam Paull, Michael J. Black, Bernhard Schölkopf
Preprint 2023
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
Preprint 2023