Weiyang Liu
University of Cambridge
Max Planck Institute for Intelligent Systems
I am currently conducting research at Cambridge and MPI Tübingen with Adrian Weller and Bernhard Schölkopf. As a member of the advising team at MeshCapade, I also work closely with Michael J. Black. Previously, I spent wonderful years at Georgia Tech. I have also spent time at Google Brain, Nvidia Research, and MERL.
I work on principled modeling of inductive bias in machine learning. My research seeks to understand how inductive bias determines generalization, and to develop "light-yet-sweet" generalizable models: (i) light: conceptually simple in methodology and easy to implement in practice, (ii) sweet: having clear intuitions and non-trivial theoretical guarantees.
Over the years, I have always found myself fascinated by geometric invariance, symmetry, and structures (graphs, causality), and by how they can serve as guiding principles for generalization. More recently, I have become very passionate about foundation models (how to simulate human-level intelligence) and 3D/4D generative modeling (how to recreate and simulate the physical world).
I have always believed in two principles in my research: (i) insight must precede application, and (ii) everything should be made as simple as possible, but not simpler. I try to follow certain research values.
I am on the academic job market. Feel free to reach out if there is a good fit!
Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision
Zhiqing Sun*, Longhui Yu*, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, Chuang Gan
Preprint 2024
GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs
Gege Gao, Weiyang Liu*, Anpei Chen, Andreas Geiger, Bernhard Schölkopf
CVPR 2024
Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization
Weiyang Liu*, Zeju Qiu*, Yao Feng**, Yuliang Xiu**, Yuxuan Xue**, Longhui Yu**, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf
ICLR 2024
arXiv | code | project | openreview | bib
Ghost on the Shell: An Expressive Representation of General 3D Shapes
Zhen Liu, Yao Feng*, Yuliang Xiu*, Weiyang Liu*, Liam Paull, Michael J. Black, Bernhard Schölkopf
ICLR 2024 Oral
arXiv | code | project | openreview | bib
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Longhui Yu, Weisen Jiang, Han Shi, J. Yu, Z. Liu, Yu Zhang, James Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu*
ICLR 2024 Spotlight
arXiv | code | project | huggingface | openreview | bib
I take great pleasure in (co-)mentoring a few talented and highly motivated students. Mentoring and working with junior students is truly a privilege, and I always learn from and am inspired by them. I am fortunate to work with (in chronological order):
- Zeju Qiu (2024 - now)
- Ph.D. student at MPI for Intelligent Systems
- Tim Z. Xiao (2024 - now)
- Ph.D. student at University of Tübingen
- Gege Gao (2023 - now)
- Ph.D. student at ETH Zürich & University of Tübingen
Former mentees (nothing is more rewarding than seeing my mentees succeed):
- Zeju Qiu (2022 - 2024): master thesis student