Good discussion about the limits (or not) of simulators like ChatGPT. lesswrong.com/posts/MmmPyJicaa

(Warning: The post is structured strangely. It starts by arguing for unlimited simulation power, and then counterargues. Don't give up early. The comments also have good discussion.)


Comment discussions: whether the transformer architecture can, in principle, carry out computations complex enough to simulate conscious minds.

And another: whether text predictors might take agent-like action to make future text easier to predict. You might think not, but consider the closely related recommender systems, like the YouTube or Amazon algorithms...
