
Does OpenAI still publish interpretability research?

a tensor is just an element in the tensor product of vector spaces and their duals, what's the problem?

(Non-joke question) Is there a tensor product of tensor products
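For the non-joke question: the tensor product is associative up to canonical isomorphism, so a tensor product of tensor products is again an ordinary tensor product, just of higher rank. A sketch (finite-dimensional vector spaces over a common field assumed):

```latex
% Associativity of the tensor product, up to canonical isomorphism:
(U \otimes V) \otimes W \;\cong\; U \otimes (V \otimes W) \;\cong\; U \otimes V \otimes W
% So an element of a "tensor product of tensor products" is just a higher-rank tensor:
T \in (V \otimes V^{*}) \otimes (V \otimes V^{*}) \;\cong\; V \otimes V^{*} \otimes V \otimes V^{*}
```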

Strong kudos to Jaron Lanier for foreseeing the current discourse on generative models. My respect has increased.

Calibration of the intellect, optimism of the will

Wait, do neural networks implement a sensible prior?

(like the speed or simplicity prior?)

If yes, which one?

Is there another AI paradigm that could result in AGI[1] other than neural networks[2]?

[1]: Please don't ask me to pin down that term.
[2]: Assuming the scaling hypothesis. "Neural network"="Stacked matrix multiplication with non-linearities thrown in".
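The definition in footnote [2] can be made concrete; a minimal sketch in pure Python (the weights are illustrative placeholders, not a trained model, and ReLU stands in for "non-linearities thrown in"):

```python
# "Neural network" = stacked matrix multiplication with non-linearities thrown in.

def matmul(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def relu(v):
    """Elementwise non-linearity."""
    return [max(0.0, x) for x in v]

def forward(layers, x):
    """Apply each weight matrix in turn, interleaving the non-linearity."""
    for W in layers[:-1]:
        x = relu(matmul(W, x))
    return matmul(layers[-1], x)  # no non-linearity on the output layer

# Example: a 2 -> 2 -> 1 network with placeholder weights
layers = [
    [[1.0, -1.0], [0.5, 0.5]],  # hidden layer: 2x2
    [[1.0, 1.0]],               # output layer: 1x2
]
print(forward(layers, [2.0, 1.0]))  # → [2.5]
```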

Wikipedia article “List of Largest Snakes”:

»There are eleven living snakes«

Does regularization of RL policies act as an impact measure?
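One way to make the question precise: a KL-regularized objective penalizes divergence from a base policy, which resembles how impact measures penalize deviation from a baseline. A sketch of the standard objective (λ and the base policy π₀ are assumptions of this framing, not from the post):

```latex
J(\pi) \;=\; \mathbb{E}_{\pi}\!\left[\sum_t r(s_t, a_t)\right] \;-\; \lambda \, D_{\mathrm{KL}}\!\left(\pi \,\|\, \pi_0\right)
```

One salient difference: the KL term penalizes deviation in the *action distribution*, while impact measures are usually framed as penalizing change in the *state* (or in attainable utility), so the two need not coincide.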

Which way modern man

Man, Google is just not very helpful anymore. I remember a story on LW about a singularity in the 1980s from a EURISKO-like system, but Google just doesn't have a clue what I want to find.

niplav boosted

> one newly synthesized heuristic kept rising in Worth, and finally I looked at it. It was doing no real work at all, but just before the credit/blame assignment phase, it quickly cycled through all the new concepts, and when it found one with high Worth it put its own name down as one of the creditors. Nothing is "wrong" with that policy, except that in the long run it fails to lead to better results.

Does this already count as an inner optimizer?

(from “The Nature of Heuristics” p. 34)

which one

Still at ~60% that no humans[1] will be alive by the end of the century

[1]: Wide concept boundary

"Turn the GPUs into rubik's cubes" is a way better framing than "melt the GPUs"
