Weiyang Liu
University of Cambridge
Max Planck Institute for Intelligent Systems
I am currently conducting research at Cambridge and MPI Tübingen with Adrian Weller and Bernhard Schölkopf. As a member of the advising team at MeshCapade, I also work closely with Michael J. Black. Previously, I spent wonderful years at Georgia Tech. I have also spent time at Google Brain, Nvidia Research, and MERL.
I work on principled modeling of inductive bias in machine learning. My research seeks to understand how inductive bias determines generalization, and to develop "light-yet-sweet" generalizable models: (i) light: conceptually simple in methodology and easy to implement in practice, (ii) sweet: having clear intuitions and non-trivial theoretical guarantees.
Over the years, I have always found myself fascinated by geometric invariance, symmetry, and structures (graphs, causality), and by how they can serve as guiding principles for generalization. Recently, I have become very passionate about large language models (general intelligence) and generative modeling of the physical world (physical intelligence). More specifically, I try to understand how LLMs perform reasoning and how to improve it in various scenarios.
I have always believed in two principles in my research: (i) insight must precede application, and (ii) everything should be made as simple as possible, but not simpler. I try to follow certain research values.
I take great pleasure in (co-)mentoring a few talented and highly motivated students. Mentoring and working with junior students is truly a privilege, and I constantly learn from and am inspired by them. I am fortunate to work with (in chronological order):
- Yamei Chen (2024 - now)
- M.S. student at Technical University of Munich
- Zeju Qiu (2024 - now)
- Ph.D. student at MPI for Intelligent Systems
- Tim Z. Xiao (2024 - now)
- Ph.D. student at University of Tübingen
Alumni list (nothing is more rewarding than seeing my mentees succeed):
Can Large Language Models Understand Symbolic Graphics Programs?
Zeju Qiu*, Weiyang Liu*, Haiwen Feng*, Zhen Liu**, Tim Z. Xiao**, Katherine M. Collins**, Joshua B. Tenenbaum, Adrian Weller, Michael J. Black, Bernhard Schölkopf
Preprint 2024
Verbalized Machine Learning: Revisiting Machine Learning with Language Models
Tim Z. Xiao, Robert Bamler, Bernhard Schölkopf, Weiyang Liu
Preprint 2024