From Twitter 

RT @CollinBurns4
How can we figure out if what a language model says is true, even when human evaluators can’t easily tell?

We show (arxiv.org/abs/2212.03827) that we can identify whether text is true or false directly from a model’s *unlabeled activations*. 🧵
