522 Episodes

  1. Prompting Strategies for Enabling Large Language Models to Infer Causation from Correlation

    Published: 24/5/2025
  2. The Parallel Knowledge Gradient Method for Batch Bayesian Optimization

    Published: 24/5/2025
  3. FunBO: Discovering Acquisition Functions for Bayesian Optimization with FunSearch

    Published: 24/5/2025
  4. Automated Social Science: A Structural Causal Model-Based Approach

    Published: 24/5/2025
  5. Causal Interpretation of Transformer Self-Attention

    Published: 24/5/2025
  6. A Causal World Model Underlying Next Token Prediction: Exploring GPT in a Controlled Environment

    Published: 24/5/2025
  7. Trace is the Next AutoDiff: Generative Optimization with Rich Feedback, Execution Traces, and LLMs

    Published: 24/5/2025
  8. Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation

    Published: 24/5/2025
  9. Prompts from Reinforcement Learning (PRL)

    Published: 24/5/2025
  10. Logits are All We Need to Adapt Closed Models

    Published: 24/5/2025
  11. Large Language Models Are (Bayesian) Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning

    Published: 23/5/2025
  12. Inference-Time Intervention: Eliciting Truthful Answers from a Language Model

    Published: 23/5/2025
  13. From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models

    Published: 23/5/2025
  14. LLM In-Context Learning as Kernel Regression

    Published: 23/5/2025
  15. Personalizing LLMs via Decode-Time Human Preference Optimization

    Published: 23/5/2025
  16. Almost Surely Safe LLM Inference-Time Alignment

    Published: 23/5/2025
  17. Survey of In-Context Learning Interpretation and Analysis

    Published: 23/5/2025
  18. From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models

    Published: 23/5/2025
  19. LLM In-Context Learning as Kernel Regression

    Published: 23/5/2025
  20. Where does In-context Learning Happen in Large Language Models?

    Published: 23/5/2025


Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
