Best AI papers explained
A podcast by Enoch H. Kang
522 Episodes
Prompting Strategies for Enabling Large Language Models to Infer Causation from Correlation
Published: 24/5/2025
The Parallel Knowledge Gradient Method for Batch Bayesian Optimization
Published: 24/5/2025
FunBO: Discovering Acquisition Functions for Bayesian Optimization with FunSearch
Published: 24/5/2025
Automated Social Science: A Structural Causal Model-Based Approach
Published: 24/5/2025
Causal Interpretation of Transformer Self-Attention
Published: 24/5/2025
A Causal World Model Underlying Next Token Prediction: Exploring GPT in a Controlled Environment
Published: 24/5/2025
Trace is the Next AutoDiff: Generative Optimization with Rich Feedback, Execution Traces, and LLMs
Published: 24/5/2025
Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation
Published: 24/5/2025
Prompts from Reinforcement Learning (PRL)
Published: 24/5/2025
Logits are All We Need to Adapt Closed Models
Published: 24/5/2025
Large Language Models Are (Bayesian) Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning
Published: 23/5/2025
Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
Published: 23/5/2025
From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models
Published: 23/5/2025
LLM In-Context Learning as Kernel Regression
Published: 23/5/2025
Personalizing LLMs via Decode-Time Human Preference Optimization
Published: 23/5/2025
Almost Surely Safe LLM Inference-Time Alignment
Published: 23/5/2025
Survey of In-Context Learning Interpretation and Analysis
Published: 23/5/2025
Where does In-context Learning Happen in Large Language Models?
Published: 23/5/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
