AICAST

Eirin Evjen and Markus Anderljung, AI Safety - Human values aligned with AI part 1/2

September 21, 2019 Season 1 Episode 7
Frank Vevle, Michael Løiten, Eirin Evjen, Markus Anderljung
Aligning artificial general intelligence (AGI) with human values
Show Notes

In this first episode of two, we talk about human values and how we should plan for and implement them as we begin building artificial general intelligence (AGI). The goal of long-term artificial intelligence safety is to ensure that advanced AI systems are aligned with human values — that they reliably do the things people want them to do.

You will get the perspectives of Eirin Evjen, Executive Director of Effective Altruism Norway, and Markus Anderljung, Project Manager for Operations & Policy Engagement at the Future of Humanity Institute.


