ChatGPT Is Not a Blurry JPEG of the Web. It's a Simulacrum. blog.domenic.me/chatgpt-simula

In which I try to provide a more accurate analogy for large language models, by summarizing @repligate's simulators thesis.

Large language models are simulators, and the different behaviors we see exhibited by ChatGPT and friends can be explained by how simulacra are instantiated and evolve within them.

The question of how intelligent simulacra like ChatGPT can become is not at all settled, and we shouldn't expect there to be fundamental limits. (But there may be practical ones.)


But the best part of having the simulators analogy handy is that it keeps you from getting stuck in the contentless framing wherein LLMs are "just" text predictors.

Thanks for reading! Please share this with the AI-curious laypeople in your life, and send me any feedback (especially on how to make it more accessible to them).
