Unbelievably simple recent ideas in ML, often top-conference fodder:

To detect whether text came from LM X, randomly perturb it and compare X's log-probabilities of the original and the perturbed versions.

If log p(original) is consistently higher than log p(perturbed), classify the text as LM-generated.

arxiv.org/abs/2301.11305v1
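
Roughly what that looks like in Python. This is a sketch, not the paper's actual method: the paper perturbs with a T5 mask-and-fill model and uses a curvature-style score, while perturb and logprob below are hypothetical stand-ins.

import random

def perturb(text, n_swaps=5):
    # hypothetical stand-in for the paper's mask-and-fill rewriting:
    # just swap a few words so the sketch stays self-contained
    words = text.split()
    if len(words) < 2:
        return text
    for _ in range(n_swaps):
        i, j = random.randrange(len(words)), random.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

def looks_lm_generated(text, logprob, n_perturbations=20, margin=0.0):
    # logprob(text) -> total log-probability of text under the suspect LM (assumed given)
    original = logprob(text)
    perturbed = [logprob(perturb(text)) for _ in range(n_perturbations)]
    # LM-generated text tends to sit near a local maximum of the LM's log-probability,
    # so random perturbations should reliably lower the score
    return original - sum(perturbed) / len(perturbed) > margin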

"to increase performance by 10% absolute, just take the majority-vote answer of several LM answers"

openreview.net/forum?id=1PL1NI
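
A minimal sketch of the voting step, assuming a hypothetical sample_answer function that queries the LM once at nonzero temperature:

from collections import Counter

def majority_vote(question, sample_answer, n_samples=10):
    # sample_answer(question) -> one sampled answer string (assumed given)
    answers = [sample_answer(question) for _ in range(n_samples)]
    # the answer that shows up most often across samples wins
    return Counter(answers).most_common(1)[0][0]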


"to reduce resource use by 50%(!), use a large model to do rejection sampling of small models' output"

arxiv.org/abs/2302.01318
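
The accept/reject loop, heavily simplified. draft_sample, draft_prob, and target_prob are assumed stand-ins for the two models; the real algorithm also resamples rejected positions from an adjusted distribution and scores all k drafted tokens in a single large-model pass.

import random

def speculative_step(prefix, draft_sample, draft_prob, target_prob, k=4):
    # prefix: list of tokens generated so far
    # draft_sample(tokens) -> next token sampled from the small model (assumed given)
    # draft_prob(tokens, tok) / target_prob(tokens, tok) -> that token's probability
    # under the small / large model (assumed given)
    accepted = []
    for _ in range(k):
        tok = draft_sample(prefix + accepted)
        p_small = draft_prob(prefix + accepted, tok)
        p_large = target_prob(prefix + accepted, tok)
        # keep the cheap draft token with probability min(1, p_large / p_small)
        if random.random() < min(1.0, p_large / p_small):
            accepted.append(tok)
        else:
            # real version: resample this position from max(0, p_large - p_small), renormalized
            break
    return accepted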

"to find hyperparams about twice as fast, start a bunch of networks training and after a while copy the weights of the one improving fastest. repeat"

deepmind.com/blog/population-b
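
A toy sketch of one exploit/explore step, assuming each member of the population has just trained and been scored for an interval; the "lr" key and the 0.8/1.2 factors are made-up stand-ins.

import copy
import random

def pbt_step(population):
    # population: list of dicts like {"weights": ..., "hyperparams": {...}, "score": ...}
    population.sort(key=lambda m: m["score"], reverse=True)
    best, worst = population[0], population[-1]
    # exploit: the straggler copies the leader's weights and hyperparameters
    worst["weights"] = copy.deepcopy(best["weights"])
    worst["hyperparams"] = copy.deepcopy(best["hyperparams"])
    # explore: then perturbs the copied hyperparameters before training resumes
    worst["hyperparams"]["lr"] *= random.choice([0.8, 1.2])
    return population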

I guess chain of thought is itself one of these.
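
e.g. the zero-shot version is literally one line appended to the prompt (the exact wording below is just a made-up example):

question = "A train leaves at 9:00 and travels 120 km at 80 km/h. When does it arrive?"
prompt = question + "\nLet's think step by step."
# the model is asked to reason aloud before giving its final answer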
