Ruibo Liu
Research at Moonshot AI.
Authored Publications
VaultGemma
Lynn Chua
Prem Eruvbetine
Chiyuan Zhang
Thomas Mesnard
Borja De Balle Pigem
Daogao Liu
Amer Sinha
Pritish Kamath
Yangsibo Huang
Christopher A. Choquette-Choo
George Kaissis
Armand Joulin
Da Yu
Ryan McKenna
arXiv (2025)
Abstract
In this work, we present VaultGemma 1B, a model from the Gemma family fully trained with differential privacy. VaultGemma 1B is a 1-billion-parameter pretrained model based on the Gemma 2 series of models and is trained on the same dataset. We will be releasing a tech report and the weights of this model.
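The abstract does not spell out the training procedure, but differentially private training of this kind typically builds on DP-SGD: clip each example's gradient and add calibrated Gaussian noise before the parameter update. The sketch below illustrates only that core step in PyTorch under those assumptions; the model, data, and hyperparameters are illustrative placeholders, not VaultGemma's actual recipe.

```python
import torch

def dp_sgd_step(model, loss_fn, batch, lr=1e-3, clip_norm=1.0, noise_multiplier=1.0):
    """One illustrative DP-SGD step: clip per-example gradients, add Gaussian noise."""
    xs, ys = batch  # tensors of inputs and targets
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(xs, ys):  # per-example gradients (microbatch of size 1)
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Scale so the full per-example gradient has L2 norm <= clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g, alpha=float(scale))

    with torch.no_grad():
        for p, s in zip(params, summed):
            # Noise standard deviation is noise_multiplier * clip_norm.
            noise = torch.randn_like(p) * noise_multiplier * clip_norm
            p.add_(-(lr / len(xs)) * (s + noise))
```

The privacy guarantee then follows from accounting over how many such noisy steps are taken; none of that accounting is shown here.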
Mind's Eye: Grounded Language Model Reasoning through Simulation
Jason Wei
Shixiang Shane Gu
Soroush Vosoughi
ICLR 2023 (2022)
Abstract
Successful and effective communication between humans and AI relies on a shared experience of the world. By training solely on written text, current language models (LMs) miss the grounded experience humans have in the real world; their failure to relate language to the physical world causes knowledge to be misrepresented and leads to obvious mistakes in their reasoning. We present Mind's Eye, a paradigm to ground language model reasoning in the physical world. Given a physical reasoning question, we use a computational physics engine (DeepMind's MuJoCo) to simulate the possible outcomes, and then use the simulation results as part of the input, which enables language models to perform reasoning. Experiments on 39 tasks in a physics alignment benchmark demonstrate that Mind's Eye can improve reasoning ability by a large margin (27.9% zero-shot, and 46.0% few-shot absolute accuracy improvement on average). Smaller language models armed with Mind's Eye can obtain similar performance to models that are 100× larger. Finally, we confirm the robustness of Mind's Eye through ablation studies.
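As a rough illustration of the paradigm the abstract describes, the sketch below shows the prompt-augmentation pattern in Python: run a physics simulation for the question, verbalize the outcome, and prepend it to the language model's input. `simulate_outcome` and `lm_generate` are hypothetical placeholders standing in for the MuJoCo pipeline and the language model of choice, not the paper's released code.

```python
# Illustrative sketch of the Mind's Eye pattern: ground LM reasoning by
# injecting a simulator's outcome into the prompt. Both helpers below are
# hypothetical placeholders, not the paper's implementation.

def simulate_outcome(question: str) -> str:
    """Placeholder: build a MuJoCo scene for the question, step the physics
    engine, and summarize the result as text (e.g. 'both balls land together')."""
    raise NotImplementedError

def lm_generate(prompt: str) -> str:
    """Placeholder: call any text-generation model on the prompt."""
    raise NotImplementedError

def minds_eye_answer(question: str) -> str:
    evidence = simulate_outcome(question)  # grounded evidence from simulation
    prompt = (
        f"Simulation result: {evidence}\n"
        f"Question: {question}\n"
        "Answer, using the simulation result as evidence:"
    )
    return lm_generate(prompt)
```

The reported gains come from supplying this simulated evidence alongside the question rather than from changing the language model itself.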