@k4r1m an Intuition Pump is a thought experiment that helps you to understand similar situations.
So, the likelihood of the sun going supernova is tiny, but if it did happen, it would mean I was seriously wrong about how the universe works.
I also think LLMs will not become AGIs. If one does, it means my model of how intelligence works is seriously wrong.
It would be nice if I could discover that in a non-catastrophic way.
@WomanCorn thanks!
On the LLM/AGI issue, my take is that regardless of my own model of intelligence, I am fairly confident that the LLM/AGI people's models of intelligence are heavily flawed. And secondly, I am not aware of any other instance where something this complicated was built accidentally, as it's claimed here…
@WomanCorn I am not familiar with the term “Intuition pump”. Did my answer go in the right direction for you? I am assuming yes (if the weight rests on the model, then a model with better validation can be better trusted?)