in the context of AI alignment, it implies that the most aligned training-regimes are also likely to be really weak (and therefore expensive).
- if u concentrate hard on finding ideas related to X, u increase the rate at which u become aware of X-related ideas, but u also lower the threshold of X-relatedness an idea needs to clear before u notice it. thus, if u want to maximize the quality/purity of ur X-related ideas, u may wish to *avoid* looking for them too hard. this is the advantage of serendipity as an explicit search-strategy. (toy sketch after this list.)
- the highest-avg-IQ academic subjects are mathematics and philosophy *because* they're also *less* financially profitable (thus, ppl go into them bc they're rly intellectually interested in them). the statistics don't seem to bear this out, but that's bc there are confounders; the underlying pattern still holds. :p
- more idr
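a minimal toy sketch of the rate-vs-purity tradeoff from the second bullet above. everything in it is invented for illustration, not taken from anywhere: relatedness scores drawn uniformly from [0, 1], a threshold that loosens as 0.9/effort, and the name `simulate_search`. the only point is that the count of accepted ideas goes up with effort while their avg relatedness goes down.

```python
import numpy as np

def simulate_search(effort, n_candidates=100_000, rng=None):
    """toy model: harder search -> more X-related ideas found, but lower avg X-relatedness.

    assumptions (all made up for the sketch):
    - each candidate idea has an X-relatedness score ~ Uniform(0, 1)
    - search effort scales how many candidates u examine
    - search effort also loosens the acceptance threshold: threshold = 0.9 / effort
    """
    if rng is None:
        rng = np.random.default_rng(0)
    examined = int(n_candidates * effort)        # more effort -> more candidates seen
    scores = rng.uniform(0, 1, size=examined)    # relatedness of each candidate
    threshold = 0.9 / effort                     # harder search -> lower bar for "counts as X-related"
    accepted = scores[scores >= threshold]
    return len(accepted), accepted.mean()

for effort in (1.0, 2.0, 4.0, 8.0):
    count, purity = simulate_search(effort)
    print(f"effort={effort:>3}: ideas found={count:>6}, avg relatedness={purity:.3f}")
```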
EXAMPLES:
- if u pay ppl to do X, u increase the number of ppl who do X, but u also dilute the field, bc now ppl do X for monetary incentives PLUS intrinsic incentives, whereas bfr it was only the latter. (toy sketch below.)
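same selection effect as a toy sketch, again with every number and name invented for illustration: each person gets an intrinsic interest in X drawn from Uniform(0, 1), doing X requires total motivation above some bar, and pay adds a fixed pull that lets less-interested ppl clear it. the count of doers goes up, the avg intrinsic interest among them goes down.

```python
import numpy as np

rng = np.random.default_rng(1)
intrinsic = rng.uniform(0, 1, size=100_000)   # each person's intrinsic interest in X

BAR = 0.8        # total motivation needed before someone actually does X (assumed)
PAY_PULL = 0.4   # extra motivation supplied by the monetary incentive (assumed)

unpaid_doers = intrinsic[intrinsic >= BAR]              # only the rly interested do X
paid_doers   = intrinsic[intrinsic + PAY_PULL >= BAR]   # interested enough OR paid enough

print(f"unpaid: n={len(unpaid_doers):>6}, avg intrinsic interest={unpaid_doers.mean():.3f}")
print(f"paid:   n={len(paid_doers):>6}, avg intrinsic interest={paid_doers.mean():.3f}")
```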