It stops.
Because the intelligence is all inside the matrices and is just as opaque to the AI as our own brains are to us.
LLMs are basically big matrices, right?
What if we get a medium-smart AI, give it access to its own code, and ask it to improve itself? Say it notices a case where large matrices can be multiplied faster with a clever algorithm, speeds itself up, and then...
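(The source doesn't name a specific algorithm, but the classic example of such a clever matrix-multiplication trick is Strassen's: multiplying 2×2 block matrices with 7 scalar multiplications instead of the naive 8, which recursively gives an O(n^2.81) algorithm. A minimal sketch of the base case:)

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications instead of 8
    (Strassen, 1969). Applied recursively to block matrices, this
    yields an O(n^log2(7)) ~ O(n^2.81) algorithm."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The saved multiplication only pays off at scale, since it costs extra additions; in practice the trick matters for very large matrices, which is exactly the regime an LLM's weight matrices live in.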
Non-anime watchers: why not start now?
This is a heartwarming movie about/for kids. There are dubbed showings.
How much counterfactually available outcome-value is left on the table by Hansonian instincts?
I.e., you have a community that tries to achieve X, but they don't achieve X as well as they could because of social status drives. How much better could they achieve X if they didn't have those drives (at the same level of intelligence)?
Update. Turns out that John #Deere has been using open code under the #GPL w/o living up to the license. The Software Freedom Conservancy (@conservancy) is calling on it to comply — which would greatly enhance #farmers' #RightToRepair.
https://sfconservancy.org/blog/2023/mar/16/john-deere-gpl-violations/
"We…publicly call on John Deere to immediately resolve all of its outstanding GPL violations…by providing complete source code…that the GPL & other copyleft licenses require, to the farmers & others who are entitled to it."