> one newly synthesized heuristic kept rising in Worth, and finally I looked at it. It was doing no real work at all, but just before the credit/blame assignment phase, it quickly cycled through all the new concepts, and when it found one with high Worth it put its own name down as one of the creditors. Nothing is "wrong" with that policy, except that in the long run it fails to lead to better results.
Does this already count as an inner optimizer?
(from “The Nature of Heuristics” p. 34)
@alexthecamel good, we don't need any more MOPs
@nyx kill --genickschuss $(ps aux | grep rice | awk '{ print($2) }')
@panchromaticity Wait, hang on. I never asked whether this is actually true
@danhon Yeah, that's not the issue 🙂
Basically a thread in certain parts of the fediverse[1] with a lot of tagged people where everyone is just insulting each other.
[1]: Right wing/kiwifarms parts, mostly
@Kurt horrible/10 with rice
@danhon Hellthreads are a phenomenon I've only seen on the fediverse.
Though this has problems with comparing across different computing paradigms (how would one compare the trace of a λ-calculus reduction to that of a Turing machine computation?)
This is maybe downstream of taking the functional rather than the algorithmic view of similarity: Wouldn't we want to *also* examine the traces we get?
A functional definition of algorithm similarity (number of same outputs on same inputs) disregards some "continuity"-ish assumptions: If A₁ gives the same answer as A₂ for many inputs, but for slightly perturbed inputs they give radically different outputs, I'd call those two algorithms very dissimilar.
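A minimal sketch of that last point (my own illustration, with made-up functions `a1`/`a2`, not from the post): two algorithms that a sampled functional definition would call near-identical, yet which flip to different outputs under a tiny input perturbation.

```python
def a1(x: float) -> int:
    # Plain integer truncation.
    return int(x)

def a2(x: float) -> int:
    # Agrees with a1 everywhere except a thin band just below each integer.
    return int(x + 1e-6)

# On a coarse sample of inputs the two look functionally identical ...
inputs = [0.1, 0.5, 1.3, 2.7, 3.4]
assert all(a1(x) == a2(x) for x in inputs)

# ... but nudge an input slightly below 1.0 and they disagree radically:
x = 0.9999995
print(a1(x), a2(x))  # a1 stays at 0, a2 crosses the boundary to 1
```

A "continuity"-aware similarity measure would penalize exactly this kind of divergence under perturbation, even though the two agree on almost all of the measure-theoretic input space.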
I operate by Crocker's rules[1].