Arora Research Lab @ Princeton

Research

Our lab develops conceptual understanding of AI models, spanning training techniques, datasets, interpretability, and evaluation. Recent work introduces ways of training and evaluating models using “skills”, which are elicited from powerful AI models and in turn used to build highly effective pipelines for improving AI models with synthetic training data. Many of our papers combine mathematical analysis with experiments.


News

[12/2024] Quanta Magazine featured our work on the emergence of skill compositionality (and its limitations) in LLMs among the CS breakthroughs of the year!
[09/2024] Congrats to Sadhika Malladi on being named 2025 Siebel Scholar!
[09/2024] New paper Instruct-SkillMix: A Powerful Pipeline for LLM Instruction Tuning was accepted to the Compositional Learning Workshop at NeurIPS 2024 (Oral) and the FITML Workshop at NeurIPS 2024!
[09/2024] New paper Can Models Learn Skill Composition from Examples? was accepted to NeurIPS 2024!
[09/2024] New paper Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates was accepted to NeurIPS 2024!
[09/2024] New paper AI-Assisted Generation of Difficult Math Questions was accepted to the MATH-AI Workshop at NeurIPS 2024!
[05/2024] New paper Trainable Transformer in Transformer was accepted to ICML 2024!
[05/2024] New paper LESS: Selecting Influential Data for Targeted Instruction Tuning was accepted to ICML 2024!
[01/2024] Quanta Magazine featured our work on the emergence of complex skills in LLMs and the Skill-Mix evaluation!
[01/2024] New paper Skill-Mix: A flexible and expandable family of evaluations for AI models was accepted to ICLR 2024!


News archives


Selected Talks