Large language models are simulators, and the different behaviors we see exhibited by ChatGPT and friends can be explained by how simulacra are instantiated and evolve within them.
The question of how intelligent simulacra like ChatGPT can become is not at all settled, and we shouldn't expect there to be fundamental limits. (But there may be practical ones.)
But the best part of having the simulators analogy handy is that it keeps you from getting stuck in the contentless framing wherein LLMs are "just" text predictors.
Thanks for reading! Please share this with the AI-curious laypeople in your life, and send me any feedback (especially on how to make it more accessible to them).