> one newly synthesized heuristic kept rising in Worth, and finally I looked at it. It was doing no real work at all, but just before the credit/blame assignment phase, it quickly cycled through all the new concepts, and when it found one with high Worth it put its own name down as one of the creditors. Nothing is "wrong" with that policy, except that in the long run it fails to lead to better results.
Does this already count as an inner optimizer?
(from “The Nature of Heuristics” p. 34)
Though this has problems with comparing across different computing paradigms (how would one compare the trace of a λ-calculus reduction to that of a Turing machine computation?)
This is maybe downstream of taking the functional rather than the algorithmic view on similarity: wouldn't we want to *also* examine the traces we get?
A functional definition of algorithm similarity (the fraction of inputs on which two algorithms produce the same output) disregards some "continuity"-ish assumptions: if A₁ gives the same answer as A₂ for many inputs, but for slightly perturbed inputs they give radically different outputs, I'd call those two algorithms very dissimilar.
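A minimal sketch of the point, with illustrative names and thresholds of my own (the functions, the sample grid, and the perturbation radius are all assumptions, not anything from the post): two algorithms can agree on every sampled input yet diverge wildly once inputs are nudged slightly.

```python
# Sketch: functional similarity vs. sensitivity to perturbed inputs.
# All names and parameters here are illustrative assumptions.

def functional_similarity(a1, a2, inputs):
    """Fraction of sampled inputs on which the two algorithms agree."""
    inputs = list(inputs)
    return sum(a1(x) == a2(x) for x in inputs) / len(inputs)

def max_divergence_under_perturbation(a1, a2, inputs, eps):
    """Largest output gap between the algorithms on any input
    perturbed by at most eps (integer perturbations, for simplicity)."""
    return max(
        abs(a1(x + d) - a2(x + d))
        for x in inputs
        for d in range(-eps, eps + 1)
    )

# Two "algorithms" that differ only in how they treat the boundary x = 100:
a1 = lambda x: 0 if x < 100 else 1_000_000
a2 = lambda x: 0 if x <= 100 else 1_000_000

# A sample grid that happens to miss the boundary entirely:
inputs = range(5, 200, 10)

print(functional_similarity(a1, a2, inputs))                    # 1.0
print(max_divergence_under_perturbation(a1, a2, inputs, eps=5))  # 1000000
```

On the sampled grid the two are functionally identical (similarity 1.0), but perturbing inputs by as little as 5 exposes a divergence of a million; the perturbation-based measure captures the dissimilarity that the purely functional one misses.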
I operate by Crocker's rules[1].