One interesting thing about https://www.lesswrong.com/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results is that some answers put the existential risk from insufficient technical AI safety research higher than the existential risk from unaligned AI, which is possible if AI safety research also helps reduce other x-risks.