I then asked a follow-up question about how I could make this work while also grouping the results. It (of course) knew what to do there too. But check out how the conversation ended! I thought I was just doing a polite "thank you", but I got a bonus lesson!
But the best part of having the simulators analogy handy is that it prevents you from getting stuck in the contentless framing wherein LLMs are "just" text predictors.
The question of how intelligent simulacra like ChatGPT can become is not at all settled, and we shouldn't expect there to be fundamental limits. (But there may be practical ones.)
Large language models are simulators, and the different behaviors we see exhibited by ChatGPT and friends can be explained by how simulacra are instantiated and evolve within them.
This is some real Susan Calvin robot psychologist shit: https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation
See https://heistak.github.io/your-code-displays-japanese-wrong/ for background on this problem. Its supplement contains this fascinating info on "discretionary ligatures"...
And another: whether text predictors might take agent-like action to make future text easier to predict. You might think no, but consider the closely related recommender systems, like the YouTube or Amazon algorithms...
Comment discussions: whether the transformer architecture is, in principle, capable of computations complex enough to simulate conscious minds.