the paradox of selection: the purity of your selection-pressure is inversely proportional to its strength.

sorta obvious when phrased that way, but the point is that you can strengthen your selection-pressure on the true target by additionally selecting on increasingly less precise proxies. this raises the rate of true-positives and lowers false-negatives, but it also raises false-positives (confounders), sometimes at a higher marginal rate, so purity falls even as throughput rises.
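
a minimal sketch of the tradeoff, with made-up rates (the 10% base-rate and the hit/false-fire rates below are hypothetical, chosen only to make the numbers concrete):

```python
import random

random.seed(0)

N = 100_000
# hypothetical base-rate: 10% of candidates genuinely have the target trait X
candidates = [random.random() < 0.10 for _ in range(N)]

def signal(has_x, hit_rate, false_rate):
    # a detector that fires with hit_rate on true X, false_rate otherwise
    return random.random() < (hit_rate if has_x else false_rate)

strict, loose = [], []
for has_x in candidates:
    precise = signal(has_x, hit_rate=0.30, false_rate=0.01)  # pure but weak
    proxy   = signal(has_x, hit_rate=0.80, false_rate=0.20)  # strong but sloppy
    if precise:
        strict.append(has_x)   # select on the precise signal alone
    if precise or proxy:
        loose.append(has_x)    # additionally select on the less precise proxy

for name, sel in [("strict", strict), ("loose ", loose)]:
    tp = sum(sel)
    print(f"{name}: selected={len(sel):6d}  true-positives={tp:5d}  "
          f"purity={tp / len(sel):.2f}")
```

with these rates the loose rule catches roughly 3x the true-positives of the strict one, but its purity drops from ~0.77 to ~0.31: stronger pressure, lower purity.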

in the context of AI alignment, this implies that the training-regimes which select most purely for alignment are also likely to be really weak (and therefore expensive).

EXAMPLES:

- if u pay ppl to do X, u increase the number of ppl who do X, but u also dilute the field, bc now ppl do X for monetary incentives PLUS intrinsic incentives, whereas before it was only the latter.

- if u concentrate hard on finding ideas related to X, u increase the rate at which u become aware of X-related ideas, but u also decrease the threshold of X-relatedness required for becoming aware of them. thus, if u want to maximize the quality/purity of ur X-related ideas, u may wish to *avoid* looking for them too hard. this is the advantage of serendipity as an explicit search-strategy.

- the highest-avg-IQ academic subjects are mathematics and philosophy *because* they're also *less* financially profitable (thus, ppl go into them bc they're rly intellectually interested in them). the statistics don't seem to bear this out, but that's bc there are confounders; the underlying pattern still holds. :p

- there are more, but idr rn
