Consistently Candid
AI safety, philosophy and other things.
Podcasting since 2023 • 19 episodes
Latest Episodes
#19 Gabe Alfour on why AI alignment is hard, what it would mean to solve it & what ordinary people can do about existential risk
Gabe Alfour is a co-founder of Conjecture and an advisor to Control AI, both organisations working to reduce risks from advanced AI. We discussed wh...
1:36:40
#18 Nathan Labenz on reinforcement learning, reasoning models, emergent misalignment & more
A lot has happened in AI since the last time I spoke to Nathan Labenz of The Cognitive Revolution, so I invited him back on for a whistle-stop tour of the most important developments we've seen o...
1:46:17
#17 Fun Theory with Noah Topper
The Fun Theory Sequence is one of Eliezer Yudkowsky's cheerier works, and considers questions such as 'how much fun is there in the universe?', 'are we having fun yet?' and 'could we be...
1:25:53