It stops.
Because the intelligence is all inside the matrices and is just as opaque to the AI as our own brains are to us.
LLMs are basically big matrices, right?
What if we get a medium-smart AI, give it access to its own code, and ask it to improve itself, and it catches a case where large matrices can be multiplied faster with a clever algorithm, speeding itself up, and then...
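(For concreteness: the "clever algorithm" here would be something like Strassen's method, which multiplies two n×n matrices with seven recursive block products instead of the naive eight, for roughly O(n^2.81) work instead of O(n^3). A minimal sketch in Python, assuming square matrices whose side length is a power of two; a practical implementation would fall back to the plain product below some cutoff size.)

```python
# Sketch of Strassen's matrix multiplication. Assumes square matrices
# with power-of-two side length; no cutoff to the naive method.
import numpy as np

def strassen(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    n = A.shape[0]
    if n == 1:
        return A * B  # base case: 1x1 matrices are just scalars

    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # Seven recursive products instead of the naive eight.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)

    # Recombine the blocks into the product.
    C = np.empty((n, n), dtype=A.dtype)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 8))
    B = rng.standard_normal((8, 8))
    # Sanity check against NumPy's built-in product.
    assert np.allclose(strassen(A, B), A @ B)
```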
Non-anime watchers: why not start now?
This is a heartwarming movie about/for kids. There are dubbed showings.
@niplav I'd guess: the more of what they're doing is conversational, the more it gets absorbed into status games. The more it is physical, the less.
(In this model, publishing academic papers is conversational.)
If you are building houses or doing work in a lab there's less space to posture. If you're just talking, there's more.
@cosmiccitizen Hatred is bad for you.
But then, I can join you in the "Bodhisattva except for my one enemy" club.
@niplav this is a really good question, and I can't even begin to conceptualize how to estimate the answer.
How much counterfactually available outcome-value is left on the table by Hansonian instincts?
I.e. you have a community that tries to achieve X, but they don't achieve X as well as they could because of social status drives. How much better could they achieve X if they didn't have those drives (at the same level of intelligence)?
@schweeds lol, yeah.
Worse than that, though: if the LLM's behavior is generalized from examples, LessWrong is a hotbed of bad examples you wouldn't want your AI to learn from.
@eniko I have not had a headache, but I've had long and deep sleep the past few days. Probably making up for sleep debt, but starting to drift off in the early evening is throwing me for a loop.