“1. The challenge of unawareness for impartial altruist action guidance: Introduction” by Anthony DiGiovanni
EA Forum Podcast (All audio) - A podcast by EA Forum Team

(This sequence assumes basic familiarity with longtermist cause prioritization concepts, though the issues I raise also apply to non-longtermist interventions.)

Are EA interventions net-positive from an impartial perspective — one that gives moral weight to all consequences, no matter how distant? What if they’re neither positive, nor negative, nor neutral? Trying to reduce x-risk, improve institutions, or end factory farming might seem robustly positive. After all, we don’t need to be certain in order to do good in expectation. But when we step back to look at how radically far-reaching the impartial perspective is, we start to see a deeper problem than “uncertainty”. This problem is unawareness: many possible consequences of our actions haven’t even occurred to us in much detail, if at all.

Why is unawareness a serious challenge for impartial altruists? Well, impartiality entails that we account for all moral patients, and all the most significant impacts [...]

---

Outline:
(08:03) Bird's-eye view of the sequence
(08:34) Introduction to unawareness
(12:03) Unawareness vs. uncertainty
(16:45) Why not just do what works?
(19:00) Case study: Severe unawareness in AI safety
(19:23) Vignette
(23:14) Where this leaves us
(24:42) Acknowledgments
(25:20) References

The original text contained 12 footnotes, which were omitted from this narration.

---

First published: June 2nd, 2025

Source: https://forum.effectivealtruism.org/posts/a3hnfA9EnYm9bssTZ/1-the-challenge-of-unawareness-for-impartial-altruist-action-1

---

Narrated by TYPE III AUDIO.