Weiyang Liu

 

Google Scholar   Github   Twitter

 

University of Cambridge
Max Planck Institute for Intelligent Systems

About Me

I am currently conducting research at Cambridge and MPI Tübingen with Adrian Weller and Bernhard Schölkopf. As a member of the advising team at MeshCapade, I also work closely with Michael J. Black. Previously, I spent wonderful years at Georgia Tech. I have also spent time at Google Brain, Nvidia Research, and MERL.

I work on principled modeling of inductive bias in machine learning. My research seeks to understand how inductive bias determines generalization, and to develop "light-yet-sweet" generalizable models: (i) light: conceptually simple in methodology and easy to implement in practice, (ii) sweet: having clear intuitions and non-trivial theoretical guarantees.

Over the years, I have always found myself fascinated by geometric invariance, symmetry, and structure (graphs, causality), and by how they can benefit generalization as a guiding principle. More recently, I have become very passionate about foundation models (how to simulate human-level intelligence) and generative modeling of the physical world (how to recreate and simulate it).

I have always believed in two principles in my research: (i) insight must precede application, and (ii) everything should be made as simple as possible, but not simpler. I also try to follow a few research values:

    - Focus on creating novel ideas, not publishing papers
    - Follow curiosity and passion, not trends
    - Ideas are not owned, but come with debts to those who came before
    - Ideas become stronger when shared, discussed and criticized
    - Life is surprisingly short, so solve problems that interest and excite you most
    - It is good to be quick, but it is more important to be deep
    - Think like an amateur, do as an expert
    - This is not only about how to do research, but also how to live your life

Students

I will be recruiting Ph.D. students and RAs starting in Fall 2025. Feel free to drop me an email.

It is my great pleasure to (co-)mentor a few talented and highly motivated students. Mentoring and working with junior students is truly a privilege, and I constantly learn from and am inspired by them. I am fortunate to work with (in chronological order):

   - Yamei Chen (2024 - now)
       - M.S. student at Technical University of Munich
   - Zeju Qiu (2024 - now)
       - Ph.D. student at MPI for Intelligent Systems
   - Tim Z. Xiao (2024 - now)
       - Ph.D. student at University of Tübingen
   - Gege Gao (2023 - now)
       - Ph.D. student at ETH Zürich & University of Tübingen

Alumni list (nothing is more rewarding than seeing my mentees succeed)

     - Zeju Qiu (2022 - 2024): master thesis student
         - M.S. at Technical University of Munich
         - Next: Ph.D. student at MPI for Intelligent Systems
     - Longhui Yu (2022 - 2024): research intern
         - M.S. at Peking University
         - Next: Ph.D. student at University of Toronto
     - Zhen Liu (2017 - 2019, 2022 - 2024): research intern
         - M.S. at Georgia Tech → Ph.D. at Mila & University of Montreal
         - Next: Assistant Professor at Chinese University of Hong Kong, Shenzhen

Recent Highlight

Verbalized Machine Learning: Revisiting Machine Learning with Language Models
Tim Z. Xiao, Robert Bamler, Bernhard Schölkopf, Weiyang Liu

Preprint 2024

arXiv | code | project | bib

  @article{xiao2024verbalized,
      title={Verbalized Machine Learning: Revisiting Machine Learning with Language Models},
      author={Xiao, Tim Z and Bamler, Robert and Sch{\"o}lkopf, Bernhard and Liu, Weiyang},
      journal={arXiv preprint arXiv:2406.04344},
      year={2024}}

Representational Alignment Supports Effective Machine Teaching
Ilia Sucholutsky, Katherine M. Collins, Maya Malaviya, Nori Jacoby, Weiyang Liu, Theodore R. Sumers, Michalis Korakakis, Umang Bhatt, Mark Ho, Joshua B. Tenenbaum, Bradley C. Love, Zachary A. Pardos, Adrian Weller, Thomas L. Griffiths

Preprint 2024

arXiv | code | project | bib

  @article{sucholutsky2024representational,
      title={Representational Alignment Supports Effective Machine Teaching},
      author={Sucholutsky, Ilia and Collins, Katherine M. and Malaviya, Maya and Jacoby, Nori and Liu, Weiyang
       and Sumers, Theodore R. and Korakakis, Michalis and Bhatt, Umang and Ho, Mark and Tenenbaum, Joshua B.
        and Love, Bradley C. and Pardos, Zachary A. and Weller, Adrian and Griffiths, Thomas L.},
      journal={arXiv preprint arXiv:2406.04302},
      year={2024}}

Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision
Zhiqing Sun*, Longhui Yu*, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, Chuang Gan

Preprint 2024

arXiv | code | project | bib

  @article{sun2024easy,
      title={Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision},
      author={Sun, Zhiqing and Yu, Longhui and Shen, Yikang and Liu, Weiyang and Yang, Yiming and Welleck, Sean and Gan, Chuang},
      journal={arXiv preprint arXiv:2403.09472},
      year={2024}}

GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs
Gege Gao, Weiyang Liu*, Anpei Chen, Andreas Geiger, Bernhard Schölkopf

CVPR 2024

arXiv | code | project | bib

  @InProceedings{gao2024graphdreamer,
      author = {Gao, Gege and Liu, Weiyang and Chen, Anpei and Geiger, Andreas and Sch{\"o}lkopf, Bernhard},
      title = {GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs},
      booktitle = {CVPR},
      year = {2024}}

Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization
Weiyang Liu*, Zeju Qiu*, Yao Feng**, Yuliang Xiu**, Yuxuan Xue**, Longhui Yu**, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf

ICLR 2024

arXiv | code | project | huggingface | openreview | talk | slides | bib

  @InProceedings{liu2024boft,
      author = {Liu, Weiyang and Qiu, Zeju and Feng, Yao and Xiu, Yuliang and Xue, Yuxuan and Yu, Longhui and Feng, Haiwen and Liu, Zhen 
        and Heo, Juyeon and Peng, Songyou and Wen, Yandong and Black, Michael J. and Weller, Adrian and Sch{\"o}lkopf, Bernhard},
      title = {Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization},
      booktitle = {ICLR},
      year = {2024}}

MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu*

ICLR 2024   Spotlight

arXiv | code | project | huggingface | openreview | bib

  @InProceedings{Yu2024MetaMath,
      title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
      author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying 
        and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
      booktitle = {ICLR},
      year={2024}}

Ghost on the Shell: An Expressive Representation of General 3D Shapes
Zhen Liu, Yao Feng*, Yuliang Xiu*, Weiyang Liu*, Liam Paull, Michael J. Black, Bernhard Schölkopf

ICLR 2024   Oral

arXiv | code | project | openreview | bib

  @InProceedings{Liu2024gshell,
      title={Ghost on the Shell: An Expressive Representation of General 3D Shapes},
      author={Liu, Zhen and Feng, Yao and Xiu, Yuliang and Liu, Weiyang 
        and Paull, Liam and Black, Michael J and Sch{\"o}lkopf, Bernhard},
      booktitle={ICLR},
      year={2024}}

Publications

Last updated on 27th May 2024.