Best AI papers explained
A podcast by Enoch H. Kang
522 Episodes
Auto-Differentiating Any LLM Workflow: A Farewell to Manual Prompting
Published: 22/5/2025
metaTextGrad: Learning to learn with language models as optimizers
Published: 22/5/2025
Semantic Operators: A Declarative Model for Rich, AI-based Data Processing
Published: 22/5/2025
Isolated Causal Effects of Language
Published: 22/5/2025
Sleep-time Compute: Beyond Inference Scaling at Test-time
Published: 22/5/2025
J1: Incentivizing Thinking in LLM-as-a-Judge
Published: 22/5/2025
ShiQ: Bringing back Bellman to LLMs
Published: 22/5/2025
Policy Learning with a Natural Language Action Space: A Causal Approach
Published: 22/5/2025
Multi-Objective Preference Optimization: Improving Human Alignment of Generative Models
Published: 22/5/2025
End-to-End Learning for Stochastic Optimization: A Bayesian Perspective
Published: 21/5/2025
TEXTGRAD: Automatic Differentiation via Text
Published: 21/5/2025
Steering off Course: Reliability Challenges in Steering Language Models
Published: 20/5/2025
Past-Token Prediction for Long-Context Robot Policies
Published: 20/5/2025
Recovering Coherent Event Probabilities from LLM Embeddings
Published: 20/5/2025
Systematic Meta-Abilities Alignment in Large Reasoning Models
Published: 20/5/2025
Predictability Shapes Adaptation: An Evolutionary Perspective on Modes of Learning in Transformers
Published: 20/5/2025
Efficient Exploration for LLMs
Published: 19/5/2025
Rankers, Judges, and Assistants: Towards Understanding the Interplay of LLMs in Information Retrieval Evaluation
Published: 18/5/2025
Bayesian Concept Bottlenecks with LLM Priors
Published: 17/5/2025
Transformers for In-Context Reinforcement Learning
Published: 17/5/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
