Weiyang Liu

 

Google Scholar   GitHub   Twitter

 

University of Cambridge
Max Planck Institute for Intelligent Systems

About Me

I am currently conducting research at Cambridge and MPI Tübingen with Adrian Weller and Bernhard Schölkopf. As a member of the advising team at MeshCapade, I also work closely with Michael J. Black. Previously, I spent wonderful years at Georgia Tech. I have also spent time at Google Brain, NVIDIA Research, and MERL.

I work on principled modeling of inductive bias in machine learning. My research seeks to understand how inductive bias determines generalization, and to develop "light-yet-sweet" generalizable models: (i) light: conceptually simple in methodology and easy to implement in practice, (ii) sweet: having clear intuitions and non-trivial theoretical guarantees.

Over the years, I have always found myself fascinated by geometric invariance, symmetry, and structure (graphs, causality), and by how they can benefit generalization as guiding principles. Recently, I have become very passionate about large language models (general intelligence) and generative modeling of the physical world (physical intelligence). More specifically, I try to understand how LLMs perform reasoning and how to improve it in various scenarios.

I have always believed in two principles in my research: (i) insight must precede application, and (ii) everything should be made as simple as possible, but not simpler. I try to follow certain research values.

    - Focus on creating novel ideas, not publishing papers
    - Follow curiosity and passion, not trends
    - Ideas are not owned, but come with debts to those who came before
    - Ideas become stronger when shared, discussed and criticized
    - Life is surprisingly short, so solve problems that interest and excite you most
    - It is good to be quick, but it is more important to be deep
    - Think like an amateur, do as an expert
    - This is not only about how to do research, but also how to live your life

Students

I take great pleasure in (co-)mentoring a few talented and highly motivated students. Mentoring and working with junior students is truly a privilege, and I constantly learn from and am inspired by them. I am fortunate to work with (in chronological order):

   - Yamei Chen (2024 - now)
       - M.S. student at Technical University of Munich
   - Zeju Qiu (2024 - now)
       - Ph.D. student at MPI for Intelligent Systems
   - Tim Z. Xiao (2024 - now)
       - Ph.D. student at University of Tübingen

Alumni list (nothing is more rewarding than seeing my mentees succeed)

     - Gege Gao (2023 - 2024): research intern
         - Ph.D. student at University of Tübingen
     - Zeju Qiu (2022 - 2024): master thesis student
         - M.S. at Technical University of Munich
         - Next: Ph.D. student at MPI for Intelligent Systems
     - Longhui Yu (2022 - 2024): research intern
         - M.S. at Peking University
         - Next: Ph.D. student at University of Toronto
     - Zhen Liu (2017 - 2019, 2022 - 2024): research intern
         - M.S. at Georgia Tech → Ph.D. at Mila & University of Montreal
         - Next: Assistant Professor at Chinese University of Hong Kong, Shenzhen

Recent Highlights

Can Large Language Models Understand Symbolic Graphics Programs?
Zeju Qiu*, Weiyang Liu*, Haiwen Feng*, Zhen Liu**, Tim Z. Xiao**, Katherine M. Collins**, Joshua B. Tenenbaum, Adrian Weller, Michael J. Black, Bernhard Schölkopf

Preprint 2024

arXiv | code | project | bib

  @article{qiu2024sgpbench,
      title={Can Large Language Models Understand Symbolic Graphics Programs?},
      author={Qiu, Zeju and Liu, Weiyang and Feng, Haiwen and Liu, Zhen and Xiao, Tim Z and Collins, Katherine M 
        and Tenenbaum, Joshua B and Weller, Adrian and Black, Michael J and Sch{\"o}lkopf, Bernhard},
      journal={arXiv preprint arXiv:2408.08313},
      year={2024}}

Verbalized Machine Learning: Revisiting Machine Learning with Language Models
Tim Z. Xiao, Robert Bamler, Bernhard Schölkopf, Weiyang Liu

Preprint 2024

arXiv | code | project | bib

  @article{xiao2024verbalized,
      title={Verbalized Machine Learning: Revisiting Machine Learning with Language Models},
      author={Xiao, Tim Z and Bamler, Robert and Sch{\"o}lkopf, Bernhard and Liu, Weiyang},
      journal={arXiv preprint arXiv:2406.04344},
      year={2024}}

Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision
Zhiqing Sun*, Longhui Yu*, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, Chuang Gan

NeurIPS 2024

arXiv | code | project | bib

  @article{sun2024easy,
      title={Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision},
      author={Sun, Zhiqing and Yu, Longhui and Shen, Yikang and Liu, Weiyang and Yang, Yiming and Welleck, Sean and Gan, Chuang},
      journal={arXiv preprint arXiv:2403.09472},
      year={2024}}

Publications

Last updated on 27th May 2024.