Personalizing AI Models with Kelvin Guu, Senior Staff Research Scientist, Google Brain

No Priors: Artificial Intelligence | Technology | Startups - A podcast by Conviction - Thursday


How do you personalize AI models? A popular school of thought in AI is to simply dump all the data you need into pre-training or fine-tuning. But that may be less efficient and less controllable than the alternative: using AI models as a reasoning engine against external data sources. Kelvin Guu, Senior Staff Research Scientist at Google, joins Sarah and Elad this week to talk about retrieval, memory, training data attribution, and model orchestration. At Google, he led some of the first efforts to leverage pre-trained LMs and neural retrievers, with more than 30 launches across multiple products. He did some of the earliest work on retrieval-augmented language models (REALM) and on training LLMs to follow instructions (FLAN).

No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

Show Links:
- Kelvin Guu Website
- Google Scholar
- FLAN: Finetuned Language Models Are Zero-Shot Learners
- Simfluence: Modeling the Influence of Individual Training Examples by Simulating Training Runs
- ROME: Locating and Editing Factual Associations in GPT
- Branch-Train-Merge: Scaling Expert Language Models with Unsupervised Domain Discovery
- Large Language Models Struggle to Learn Long-Tail Knowledge

Sign up for new podcasts every week.
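The retrieval-augmented approach discussed in the episode can be sketched in a few lines: rather than baking facts into model weights via fine-tuning, relevant documents are retrieved at query time and handed to the model as context. This is a minimal toy sketch, not REALM itself — the bag-of-words retriever and the names `retrieve` and `build_prompt` are illustrative assumptions, and a real system would use a learned neural retriever and an actual LM call.

```python
import re

def tokenize(text):
    # Toy tokenizer: lowercase alphabetic word sets (assumption, not REALM's).
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, documents, k=2):
    """Return the k documents sharing the most words with the query."""
    q = tokenize(query)
    scored = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Assemble a prompt asking the model to reason over retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "REALM augments language model pre-training with a learned retriever.",
    "FLAN fine-tunes language models on instruction-phrased tasks.",
    "Branch-Train-Merge trains expert models on separate domains.",
]

# The prompt would then be passed to an LLM, which answers from the
# retrieved context instead of relying solely on memorized weights.
print(build_prompt("How does REALM use a retriever?", docs))
```

The appeal, as framed in the episode, is controllability: the external data source can be updated or inspected without retraining the model.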
Email feedback to [email protected]
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Kelvin_Guu

Show Notes:
[1:44] - Kelvin's background in math, statistics, and natural language processing at Stanford
[3:24] - The questions driving the REALM paper
[7:08] - Frameworks around retrieval augmentation & expert models
[10:16] - Why modularity is important
[11:36] - FLAN paper and instruction following
[13:28] - Updating model weights in real time and other continuous-learning methods
[15:08] - Simfluence paper & explainability with large language models
[18:11] - ROME paper, "model surgery," and exciting research areas
[19:51] - Personal opinions and thoughts on AI agents & research
[24:59] - How the human brain compares to AGI regarding memory and emotions
[28:08] - How models become more contextually available
[30:45] - Accessibility of models
[33:47] - Advice to future researchers
