Best AI papers explained
A podcast by Enoch H. Kang
528 Episodes
Why MCP won
Published: 31/3/2025
SWEET-RL: Training LLM Agents for Collaborative Reasoning
Published: 31/3/2025
TheoryCoder: Bilevel Planning with Synthesized World Models
Published: 30/3/2025
Driving Forces in AI: Scaling to 2025 and Beyond (Jason Wei, OpenAI)
Published: 29/3/2025
Expert Demonstrations for Sequential Decision Making under Heterogeneity
Published: 28/3/2025
TextGrad: Backpropagating Language Model Feedback for Generative AI Optimization
Published: 27/3/2025
MemReasoner: Generalizing Language Models on Reasoning-in-a-Haystack Tasks
Published: 27/3/2025
RAFT: In-Domain Retrieval-Augmented Fine-Tuning for Language Models
Published: 27/3/2025
Inductive Biases for Exchangeable Sequence Modeling
Published: 26/3/2025
InverseRLignment: LLM Alignment via Inverse Reinforcement Learning
Published: 26/3/2025
Prompt-OIRL: Offline Inverse RL for Query-Dependent Prompting
Published: 26/3/2025
Alignment from Demonstrations for Large Language Models
Published: 25/3/2025
Q♯: Distributional RL for Optimal LLM Post-Training
Published: 18/3/2025
Scaling Test-Time Compute Without Verification or RL is Suboptimal
Published: 14/3/2025
Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning
Published: 14/3/2025
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Published: 14/3/2025
Revisiting Superficial Alignment Hypothesis
Published: 14/3/2025
Diagnostic Uncertainty: Teaching Language Models to Describe Open-Ended Uncertainty
Published: 14/3/2025
Language Model Personalization via Reward Factorization
Published: 14/3/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
