#78 - RAFT: Why just use RAG if you can also fine-tune?

Life with AI - A podcast by Filipe Lauar - Thursday

Hello! In this episode I talk about Retrieval Augmented Fine Tuning (RAFT), a paper that proposes a technique for combining domain-specific fine-tuning with RAG to improve the retrieval capabilities of LLMs. In the episode I also discuss another paper called RAFT, this time Reward rAnked FineTuning, which proposes a way to perform RLHF without the convergence problems of Reinforcement Learning.

Retrieval Augmented Fine Tuning: https://arxiv.org/abs/2403.10131v1
Reward rAnked FineTuning: https://arxiv.org/pdf/2304.06767.pdf
Instagram of the podcast: https://www.instagram.com/podcast.lifewithai
LinkedIn of the podcast: https://www.linkedin.com/company/life-with-ai
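To give a feel for the first paper, here is a minimal sketch of how a RAFT-style training example could be assembled: the question is paired with distractor documents, and the golden (oracle) document is included only with some probability, so the model learns to answer from relevant context and ignore irrelevant passages. The helper names and parameters (build_raft_example, p_oracle) are illustrative assumptions, not the paper's released code.

```python
import random

def build_raft_example(question, oracle_doc, corpus,
                       num_distractors=4, p_oracle=0.8):
    """Assemble one supervised fine-tuning example in the RAFT style.

    With probability p_oracle the oracle document is kept in the
    context alongside the distractors; otherwise only distractors
    are shown, which trains the model to be robust to retrieval noise.
    """
    # Sample distractor documents that are not the golden one.
    distractors = random.sample(
        [d for d in corpus if d != oracle_doc], num_distractors)
    docs = distractors + ([oracle_doc] if random.random() < p_oracle else [])
    random.shuffle(docs)
    context = "\n\n".join(f"[doc {i}] {d}" for i, d in enumerate(docs))
    # The training target would be a chain-of-thought answer that
    # cites the oracle document (omitted here for brevity).
    return f"{context}\n\nQuestion: {question}\nAnswer:"
```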
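The second paper's idea can be sketched just as compactly: sample several candidate responses per prompt, rank them with a reward model, and fine-tune only on the best-ranked ones, so alignment reduces to ordinary supervised fine-tuning instead of RL. The generate and reward_model callables below are placeholders I am assuming, not a real API.

```python
def reward_ranked_batch(prompts, generate, reward_model, k=8):
    """Build an SFT dataset by reward ranking.

    For each prompt, sample k candidate responses, score them with the
    reward model, and keep only the highest-reward response. The
    resulting (prompt, response) pairs feed a standard supervised
    fine-tuning loop, so no RL optimizer is needed.
    """
    sft_pairs = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(k)]
        best = max(candidates, key=lambda r: reward_model(prompt, r))
        sft_pairs.append((prompt, best))
    return sft_pairs
```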
