One interesting thing about lesswrong.com/posts/QvwSr5Lsxy is that some answers claim the existential risk from insufficient technical AI safety research is greater than the existential risk from unaligned AI itself, which is possible if AI safety research also helps reduce other x-risks.
