What if AI recursive self-improvement only gets to turn the screw one time?

LLMs are basically big matrices, right?
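(To make that concrete: the bulk of an LLM's compute really is dense matrix multiplies. A toy sketch of one transformer MLP block, with made-up shapes, just to illustrate:)

```python
# Toy gloss on "LLMs are big matrices": one transformer MLP block
# is literally two matrix multiplies around a nonlinearity.
# Shapes and names are illustrative, not from any real model.
import numpy as np

d_model, d_ff = 512, 2048
W_in  = np.random.randn(d_model, d_ff) * 0.02   # "the intelligence"
W_out = np.random.randn(d_ff, d_model) * 0.02   # lives in these weights

def mlp_block(x):
    # x: (tokens, d_model) -> (tokens, d_model)
    return np.maximum(x @ W_in, 0.0) @ W_out    # matmul, ReLU, matmul

x = np.random.randn(10, d_model)
print(mlp_block(x).shape)  # (10, 512)
```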

What if we get a medium-smart AI, give it access to its own code, and ask it to improve itself? It spots that large matrices can be multiplied faster with a clever algorithm, makes itself faster, and then...

It stops.

Because the intelligence is all inside the matrices and is just as opaque to the AI as our own brains are to us.
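(The "clever algorithm" in the scenario could be something like Strassen's method; that's my example, not a claim about what an AI would actually find. It multiplies two n×n matrices with 7 recursive multiplies instead of 8, roughly O(n^2.81) instead of O(n^3). A minimal sketch, assuming square matrices whose size is a power of two:)

```python
# Strassen's algorithm: 7 recursive multiplies instead of 8.
# Illustrative sketch; assumes n-by-n inputs with n a power of two.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:                     # small blocks: plain multiply
        return A @ B
    h = n // 2                          # split into four quadrants
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)                # reassemble the quadrants
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
assert np.allclose(strassen(A, B), A @ B)  # same product, fewer multiplies
```

(Note what this buys you: the same answer, computed faster. The weights, and whatever understanding is encoded in them, are untouched.)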
