I'd like to see how power-seeking an LLM is if it's trained on a corpus that excludes everything written by anyone who has ever posted on LessWrong.
@WomanCorn not sure how you could possibly suggest that my AI safety pretentious SV bro VC sex cult is power seeking? just not rational bro
@schweeds lol, yeah.
Worse than that, though: if the LLM's behavior is generalized from examples, LessWrong is a hotbed of bad examples you wouldn't want your AI to learn from.
a Schelling point for those who seek one