LLMs are basically big matrices, right?
What if we get a medium-smart AI, give it access to its own code, and ask it to improve itself? It spots a case where large matrices can be multiplied with a clever sub-cubic algorithm, patches itself to run faster, and then...
It stops.
Because the intelligence is all inside the matrices, and those weights are just as opaque to the AI as our own brains are to us.
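For the curious: clever algorithms like that really exist. Strassen's method multiplies two n×n matrices with 7 recursive sub-multiplications instead of 8, cutting the cost from O(n^3) to roughly O(n^2.81). A minimal sketch, assuming NumPy and square matrices whose side is a power of two (the `strassen` function and `cutoff` threshold here are mine, just for illustration):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices A and B (side a power of two) via Strassen's algorithm."""
    n = A.shape[0]
    if n <= cutoff:  # small blocks: the plain multiply wins, so recurse no further
        return A @ B
    h = n // 2
    # Split each matrix into four quadrants.
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of the naive eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Recombine the seven products into the quadrants of the result.
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

# Quick sanity check against NumPy's own multiply.
A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
assert np.allclose(strassen(A, B), A @ B)
```

The cutoff matters: below a few dozen rows, the extra additions cost more than the saved multiplication, so real implementations fall back to the plain product at small sizes.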