93 - Will machines impede moral progress?
Philosophical Disquisitions - A podcast by John Danaher
Thomas Sinclair (left), Ben Kenward (right)

Lots of people are worried about the ethics of AI. One particular area of concern is whether we should program machines to follow existing normative/moral principles when making decisions. But social moral values change over time. Should machines not be designed to allow for such changes? If machines are programmed to follow our current values, will they impede moral progress? In this episode, I talk to Ben Kenward and Thomas Sinclair about this issue. Ben is a Senior Lecturer in Psychology at Oxford Brookes University in the UK. His research focuses on ecological psychology, mainly examining environmental activism such as the Extinction Rebellion movement, of which he is a part. Thomas is a Fellow and Tutor in Philosophy at Wadham College, Oxford, and an Associate Professor of Philosophy at Oxford's Faculty of Philosophy. His research and teaching focus on questions in moral and political philosophy.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

- What is a moral value?
- What is a moral machine?
- What is moral progress?
- Has society progressed, morally speaking, in the past?
- How can we design moral machines?
- What's the problem with getting machines to follow our current moral consensus?
- Will people over-defer to machines? Will they outsource their moral reasoning to machines?
- Why is a lack of moral progress such a problem right now?

Relevant Links

- Thomas's webpage
- Ben's webpage
- 'Machine morality, moral progress and the looming environmental disaster' by Ben and Tom