Weiyang Liu

 

Google Scholar   Github   Twitter

 

University of Cambridge
Max Planck Institute for Intelligent Systems

About Me

I am currently conducting research at Cambridge and MPI Tübingen with Adrian Weller and Bernhard Schölkopf. As a member of the advising team at MeshCapade, I also work closely with Michael J. Black. Previously, I spent wonderful years at Georgia Tech. I have also spent time at Google Brain, Nvidia Research, and MERL.

I work on principled modeling of inductive bias in machine learning. My research seeks to understand how inductive bias determines generalization, and to develop "light-yet-sweet" generalizable models: (i) light: conceptually simple in methodology and easy to implement in practice, (ii) sweet: having clear intuitions and non-trivial theoretical guarantees.

Over the years, I have always found myself fascinated by geometric invariance, symmetry, and structure (graphs, causality), and by how they can serve as guiding principles for generalization. More recently, I have become very passionate about foundation models (how to simulate human-level intelligence) and 3D/4D generative modeling (how to recreate and simulate the physical world).

I have always believed in two principles in my research: (i) insight must precede application, and (ii) everything should be made as simple as possible, but not simpler. I try to follow certain research values:

    - Focus on creating novel ideas, not publishing papers
    - Follow curiosity and passion, not trends
    - Ideas are not owned, but come with debts to those who came before
    - Ideas become stronger when shared, discussed, and criticized
    - Life is surprisingly short, so solve the problems that interest and excite you most
    - It is good to be quick, but it is more important to be deep
    - Think like an amateur, do as an expert
    - These values are not only about how to do research, but also about how to live your life

I am on the academic job market. Feel free to reach out if there is a good fit!

Recent Highlight

Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision
Zhiqing Sun*, Longhui Yu*, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, Chuang Gan

Preprint 2024

arXiv | code | project | bib

  @article{sun2024easy,
      title={Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision},
      author={Sun, Zhiqing and Yu, Longhui and Shen, Yikang and Liu, Weiyang and Yang, Yiming and Welleck, Sean and Gan, Chuang},
      journal={arXiv preprint arXiv:2403.09472},
      year={2024}}

GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs
Gege Gao, Weiyang Liu*, Anpei Chen, Andreas Geiger, Bernhard Schölkopf

CVPR 2024

arXiv | code | project | bib

  @inproceedings{gao2024graphdreamer,
      title={GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs},
      author={Gao, Gege and Liu, Weiyang and Chen, Anpei and Geiger, Andreas and Sch{\"o}lkopf, Bernhard},
      booktitle={CVPR},
      year={2024}}

Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization
Weiyang Liu*, Zeju Qiu*, Yao Feng**, Yuliang Xiu**, Yuxuan Xue**, Longhui Yu**, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf

ICLR 2024

arXiv | code | project | openreview | bib

  @inproceedings{liu2024boft,
      title={Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization},
      author={Liu, Weiyang and Qiu, Zeju and Feng, Yao and Xiu, Yuliang and Xue, Yuxuan and Yu, Longhui and Feng, Haiwen and Liu, Zhen
        and Heo, Juyeon and Peng, Songyou and Wen, Yandong and Black, Michael J. and Weller, Adrian and Sch{\"o}lkopf, Bernhard},
      booktitle={ICLR},
      year={2024}}

Ghost on the Shell: An Expressive Representation of General 3D Shapes
Zhen Liu, Yao Feng*, Yuliang Xiu*, Weiyang Liu*, Liam Paull, Michael J. Black, Bernhard Schölkopf

ICLR 2024   Oral

arXiv | code | project | openreview | bib

  @inproceedings{liu2024gshell,
      title={Ghost on the Shell: An Expressive Representation of General 3D Shapes},
      author={Liu, Zhen and Feng, Yao and Xiu, Yuliang and Liu, Weiyang
        and Paull, Liam and Black, Michael J. and Sch{\"o}lkopf, Bernhard},
      booktitle={ICLR},
      year={2024}}

MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu*

ICLR 2024   Spotlight

arXiv | code | project | huggingface | openreview | bib

  @inproceedings{yu2024metamath,
      title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
      author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying
        and Zhang, Yu and Kwok, James T. and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
      booktitle={ICLR},
      year={2024}}

Mentoring

I take great pleasure in (co-)mentoring a few talented and highly motivated students. Mentoring and working with junior students is truly a privilege, and I always learn from and am inspired by them. I am fortunate to work with (in chronological order):

   - Zeju Qiu (2024 - now)
       - Ph.D. student at MPI for Intelligent Systems
   - Tim Z. Xiao (2024 - now)
       - Ph.D. student at University of Tübingen
   - Gege Gao (2023 - now)
       - Ph.D. student at ETH Zürich & University of Tübingen

Former mentees (nothing is more rewarding than seeing my mentees succeed):

   - Zeju Qiu (2022 - 2024): master thesis student
       - M.S. at Technical University of Munich
       - Next: Ph.D. student at MPI for Intelligent Systems
   - Longhui Yu (2022 - 2024): research intern
       - M.S. at Peking University
       - Ph.D. offers from Caltech, UToronto
   - Zhen Liu (2017 - 2019, 2022 - 2024): research intern
       - M.S. at Georgia Tech
       - Next: Ph.D. student at Mila & University of Montreal

Publication

Last updated on 27th October 2023.