Hao Liu
PhD student at Berkeley
hao.liu@cs.berkeley.edu
Github | Scholar | Twitter
About
I am a PhD student in EECS at UC Berkeley advised by Prof. Pieter Abbeel in the Berkeley Artificial Intelligence Research (BAIR) Lab.
My research area is machine learning and neural networks, with the goal of developing computationally scalable solutions for generalization. I work on modeling and training methods that excel across many domains (BlockwiseTransformer, RingAttention, Large World Model, Agentic Transformer), and on discovery methods that go beyond imitating human knowledge (APT, APS, CIC, URLB).
Publications
See my Google Scholar page for a complete list.
  • World Model on Million-Length Video And Language With Blockwise RingAttention
    Hao Liu*, Wilson Yan*, Matei Zaharia, Pieter Abbeel
arXiv, 2024
    bib | paper | code | project | tl;dr
  • Ring Attention with Blockwise Transformers for Near-Infinite Context
    Hao Liu, Matei Zaharia, Pieter Abbeel
    International Conference on Learning Representations (ICLR), 2024
    bib | paper | code | media | tl;dr
  • Blockwise Parallel Transformer for Large Context Models
    Hao Liu, Pieter Abbeel
    Advances in Neural Information Processing Systems (NeurIPS), Spotlight Presentation, 2023
    bib | paper | code | tl;dr
  • Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment
    Hao Liu, Wilson Yan, Pieter Abbeel
    Advances in Neural Information Processing Systems (NeurIPS), 2023
    bib | paper | code | tl;dr
  • Chain of Hindsight Aligns Language Models with Feedback
    Hao Liu, Carmelo Sferrazza, Pieter Abbeel
    International Conference on Learning Representations (ICLR), 2024
    bib | paper | code | tl;dr
  • Emergent Agentic Transformer from Chain of Hindsight Experience
    Hao Liu, Pieter Abbeel
    International Conference on Machine Learning (ICML), 2023
    bib | paper | tl;dr
  • Masked Autoencoding for Scalable and Generalizable Decision Making
    Fangchen Liu*, Hao Liu*, Aditya Grover, Pieter Abbeel
    Advances in Neural Information Processing Systems (NeurIPS), 2022
    bib | paper | code | tl;dr
  • Palm up: Playing in the Latent Manifold for Unsupervised Pretraining
    Hao Liu, Tom Zahavy, Volodymyr Mnih, Satinder Singh
    Advances in Neural Information Processing Systems (NeurIPS), 2022
    bib | paper | tl;dr
  • URLB: Unsupervised Reinforcement Learning Benchmark
    Michael Laskin, Denis Yarats, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang,
    Lerrel Pinto, Pieter Abbeel
    NeurIPS 2021 Track Datasets and Benchmarks, 2021
    bib | paper | code | tl;dr
  • APS: Active Pre-Training with Successor Features
    Hao Liu, Pieter Abbeel
    International Conference on Machine Learning (ICML), Long Oral Presentation, 2021
    bib | paper | code
  • Behavior From the Void: Unsupervised Active Pre-Training
    Hao Liu, Pieter Abbeel
    Advances in Neural Information Processing Systems (NeurIPS), Spotlight Presentation, 2021
    bib | paper | code | tl;dr
Education / Experience
Teaching and Service