518 Episodes

  1. Planning Anything with Rigor: General-Purpose Zero-Shot Planning with LLM-based Formalized Programming

    Published: 28/5/2025
  2. Automated Design of Agentic Systems

    Published: 28/5/2025
  3. What’s the Magic Word? A Control Theory of LLM Prompting

    Published: 28/5/2025
  4. BoNBoN Alignment for Large Language Models and the Sweetness of Best-of-n Sampling

    Published: 27/5/2025
  5. RL with KL penalties is better viewed as Bayesian inference

    Published: 27/5/2025
  6. Asymptotics of Language Model Alignment

    Published: 27/5/2025
  7. Qwen 2.5, RL, and Random Rewards

    Published: 27/5/2025
  8. Theoretical guarantees on the best-of-n alignment policy

    Published: 27/5/2025
  9. Score Matching Enables Causal Discovery of Nonlinear Additive Noise Models

    Published: 27/5/2025
  10. Improved Techniques for Training Score-Based Generative Models

    Published: 27/5/2025
  11. Your Pre-trained LLM is Secretly an Unsupervised Confidence Calibrator

    Published: 27/5/2025
  12. AlphaEvolve: A coding agent for scientific and algorithmic discovery

    Published: 27/5/2025
  13. Harnessing the Universal Geometry of Embeddings

    Published: 27/5/2025
  14. Goal Inference using Reward-Producing Programs in a Novel Physics Environment

    Published: 27/5/2025
  15. Trial-Error-Explain In-Context Learning for Personalized Text Generation

    Published: 27/5/2025
  16. Reinforcement Learning for Reasoning in Large Language Models with One Training Example

    Published: 27/5/2025
  17. Test-Time Reinforcement Learning (TTRL)

    Published: 27/5/2025
  18. Interpreting Emergent Planning in Model-Free Reinforcement Learning

    Published: 26/5/2025
  19. Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems

    Published: 26/5/2025
  20. Beyond Reward Hacking: Causal Rewards for Large Language Model Alignment

    Published: 26/5/2025

Cut through the noise. We curate and break down the most important AI papers so you don’t have to.