A good discussion of the limits (or lack thereof) of simulators like ChatGPT. https://www.lesswrong.com/posts/MmmPyJicaaJRk4Eg2/the-limit-of-language-models
(Warning: the post is structured strangely. It starts by arguing for unlimited simulation power and only then argues the other side, so don't give up early. The comments also have good discussion.)
And another discussion: whether text predictors might take agent-like actions to make future text easier to predict. You might think not, but consider the closely related recommender systems, like the YouTube or Amazon algorithms, which arguably already nudge users toward more predictable behavior...