AI hallucinations: Turn on, tune in, beep boop
Quartz Obsession - A podcast by Quartz
ChatGPT isn’t always right. In fact, it’s often very wrong, giving faulty biographical information about a person or whiffing on the answers to simple questions. But instead of saying it doesn’t know, ChatGPT often makes things up. Chatbots can’t actually lie, but researchers sometimes call these untruthful performances “hallucinations”: not quite a lie, but a vision of something that isn’t there. So what’s really happening here, and what does it tell us about the way AI systems err?

Presented by Deloitte
Episode art by Vicky Leta