37 Episodes

  1. Episode 37: Rylan Schaeffer, Stanford: On investigating emergent abilities and challenging dominant research ideas

    Published: 18/9/2024
  2. Episode 36: Ari Morcos, DatologyAI: On leveraging data to democratize model training

    Published: 11/7/2024
  3. Episode 35: Percy Liang, Stanford: On the paradigm shift and societal effects of foundation models

    Published: 9/5/2024
  4. Episode 34: Seth Lazar, Australian National University: On legitimate power, moral nuance, and the political philosophy of AI

    Published: 12/3/2024
  5. Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference

    Published: 9/8/2023
  6. Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize

    Published: 22/6/2023
  7. Episode 31: Bill Thompson, UC Berkeley, on how cultural evolution shapes knowledge acquisition

    Published: 29/3/2023
  8. Episode 30: Ben Eysenbach, CMU, on designing simpler and more principled RL algorithms

    Published: 23/3/2023
  9. Episode 29: Jim Fan, NVIDIA, on foundation models for embodied agents, scaling data, and why prompt engineering will become irrelevant

    Published: 9/3/2023
  10. Episode 28: Sergey Levine, UC Berkeley, on the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems

    Published: 1/3/2023
  11. Episode 27: Noam Brown, FAIR, on achieving human-level performance in poker and Diplomacy, and the power of spending compute at inference time

    Published: 9/2/2023
  12. Episode 26: Sugandha Sharma, MIT, on biologically inspired neural architectures, how memories can be implemented, and control theory

    Published: 17/1/2023
  13. Episode 25: Nicklas Hansen, UCSD, on long-horizon planning and why algorithms don't drive research progress

    Published: 16/12/2022
  14. Episode 24: Jack Parker-Holder, DeepMind, on open-endedness, evolving agents and environments, online adaptation, and offline learning

    Published: 6/12/2022
  15. Episode 23: Celeste Kidd, UC Berkeley, on attention and curiosity, how we form beliefs, and where certainty comes from

    Published: 22/11/2022
  16. Episode 22: Archit Sharma, Stanford, on unsupervised and autonomous reinforcement learning

    Published: 17/11/2022
  17. Episode 21: Chelsea Finn, Stanford, on the biggest bottlenecks in robotics and reinforcement learning

    Published: 3/11/2022
  18. Episode 20: Hattie Zhou, Mila, on supermasks, iterative learning, and fortuitous forgetting

    Published: 14/10/2022
  19. Episode 19: Minqi Jiang, UCL, on environment and curriculum design for general RL agents

    Published: 19/7/2022
  20. Episode 18: Oleh Rybkin, UPenn, on exploration and planning with world models

    Published: 11/7/2022

Technical discussions with deep learning researchers who study how to build intelligence. Made for researchers, by researchers.