528 Episodes

  1. Why MCP won

    Published: 31/3/2025
  2. SWEET-RL: Training LLM Agents for Collaborative Reasoning

    Published: 31/3/2025
  3. TheoryCoder: Bilevel Planning with Synthesized World Models

    Published: 30/3/2025
  4. Driving Forces in AI: Scaling to 2025 and Beyond (Jason Wei, OpenAI)

    Published: 29/3/2025
  5. Expert Demonstrations for Sequential Decision Making under Heterogeneity

    Published: 28/3/2025
  6. TextGrad: Backpropagating Language Model Feedback for Generative AI Optimization

    Published: 27/3/2025
  7. MemReasoner: Generalizing Language Models on Reasoning-in-a-Haystack Tasks

    Published: 27/3/2025
  8. RAFT: In-Domain Retrieval-Augmented Fine-Tuning for Language Models

    Published: 27/3/2025
  9. Inductive Biases for Exchangeable Sequence Modeling

    Published: 26/3/2025
  10. InverseRLignment: LLM Alignment via Inverse Reinforcement Learning

    Published: 26/3/2025
  11. Prompt-OIRL: Offline Inverse RL for Query-Dependent Prompting

    Published: 26/3/2025
  12. Alignment from Demonstrations for Large Language Models

    Published: 25/3/2025
  13. Q♯: Distributional RL for Optimal LLM Post-Training

    Published: 18/3/2025
  14. Scaling Test-Time Compute Without Verification or RL is Suboptimal

    Published: 14/3/2025
  15. Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning

    Published: 14/3/2025
  16. Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

    Published: 14/3/2025
  17. Revisiting the Superficial Alignment Hypothesis

    Published: 14/3/2025
  18. Diagnostic Uncertainty: Teaching Language Models to Describe Open-Ended Uncertainty

    Published: 14/3/2025
  19. Language Model Personalization via Reward Factorization

    Published: 14/3/2025

Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
