I'd like to see how power-seeking an LLM is if it's trained on a corpus that excludes everything written by anyone who has ever posted on LessWrong.
@schweeds lol, yeah.
Worse than that, though: if the LLM's behavior generalizes from examples, LessWrong is a hotbed of bad examples you wouldn't want your AI learning from.
@WomanCorn sounds like a good idea! doubt it'll help in any way in the limit (also epistemic contamination is a bitch), but i'll cheer the people who try
@WomanCorn not sure how you could possibly suggest that my AI safety pretentious SV bro VC sex cult is power-seeking? just not rational bro