
Calibration of the intellect, optimism of the will

Wait, do neural networks implement a sensible prior?

(like the speed or simplicity prior?)

If yes, which one?

Is there another AI paradigm that could result in AGI[1] other than neural networks[2]?

[1]: Please don't ask me to pin down that term.
[2]: Assuming the scaling hypothesis. "Neural network" = "stacked matrix multiplications with non-linearities thrown in".

Wikipedia article “List of Largest Snakes”:

»There are eleven living snakes«

Does regularization of RL policies act as an impact measure?

Which way modern man


Man, Google is just not very helpful anymore. I remember a story on LW about a singularity in the 1980s from a EURISKO-like system, but Google just doesn't have a clue what I want to find.

niplav boosted

> one newly synthesized heuristic kept rising in Worth, and finally I looked at it. It was doing no real work at all, but just before the credit/blame assignment phase, it quickly cycled through all the new concepts, and when it found one with high Worth it put its own name down as one of the creditors. Nothing is "wrong" with that policy, except that in the long run it fails to lead to better results.

Does this already count as an inner optimizer?

(from “The Nature of Heuristics” p. 34)

which one

Still at ~60% on no humans[1] being alive by the end of the century

[1]: Wide concept boundary

"Turn the GPUs into Rubik's cubes" is a way better framing than "melt the GPUs"

Though this has problems with comparing across different computing paradigms (how would you compare the trace of a λ-calculus reduction to that of a Turing machine computation?)


This is maybe downstream of taking the functional rather than the algorithmic view on similarity: wouldn't we want to *also* examine the traces we get?


A functional definition of algorithm similarity (number of same outputs on same inputs) disregards some "continuity"-ish assumptions: If A₁ gives the same answer as A₂ for many inputs, but for slightly perturbed inputs they give radically different outputs, I'd call those two algorithms very dissimilar.
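A toy sketch of that worry (hypothetical functions of my own, not from anywhere in particular): two functions agree exactly on an integer grid of inputs, so a same-outputs-on-same-inputs count would call them maximally similar there, yet they diverge by ~100 under a half-step perturbation.

```python
import math

def a1(x):
    return x

def a2(x):
    # Agrees with a1 wherever sin(pi * x) vanishes, i.e. on all
    # integers, but oscillates with amplitude 100 in between.
    return x + 100 * math.sin(math.pi * x)

# Functional similarity: count of agreeing outputs on a shared input set
# (with a tolerance, since floating-point sin(pi * n) is only ~0).
inputs = range(10)
agreement = sum(abs(a1(x) - a2(x)) < 1e-9 for x in inputs)
# agreement == 10: "identical" on this grid, yet a1(0.5) == 0.5
# while a2(0.5) == 100.5 -- hardly what we'd want to call similar.
```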

comparing politics to constrained optimization: taxes are equivalent to the penalty method, while regulations are equivalent to barrier functions.
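A minimal sketch of the analogy (my own toy example, names and numbers are illustrative): minimize x² subject to x ≥ 1. A penalty term "taxes" violations in proportion to how large they are, while a barrier term acts like a regulation that makes the boundary prohibitively costly to approach from inside.

```python
import math

def penalty_objective(x, mu):
    # Penalty method ("tax"): infeasible points are allowed, but a
    # violation of x >= 1 costs mu * violation^2 on top of x^2.
    violation = max(0.0, 1.0 - x)
    return x**2 + mu * violation**2

def barrier_objective(x, mu):
    # Barrier method ("regulation"): only x > 1 is admissible; the log
    # term blows up as x approaches the boundary from inside.
    if x <= 1.0:
        return math.inf
    return x**2 - mu * math.log(x - 1.0)

# A tax makes violating the constraint costly but possible...
assert penalty_objective(0.5, mu=10.0) == 0.5**2 + 10.0 * 0.5**2
# ...while a regulation rules it out entirely.
assert barrier_objective(0.5, mu=0.1) == math.inf
```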

I want there to be a norm that it is *good* for people to first state that they notice their status-guided motivations, and then proceed to answer on the object level when asked about things.
