#noxp the mastodon fandom is dying, reshare this if you’re a true masturbator
One interesting thing about https://www.lesswrong.com/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results is that some answers rate the existential risk from insufficient technical AI safety research as greater than the existential risk from unaligned AI, which is possible if AI safety research also helps reduce other xrisks
can cause much wailing and gnashing of teeth if it's something that's very hard to optimise for
Moved to @TetraspaceGrouping