This article made me feel better about my relatively unorganized digital life and knowledge graph. The constant struggle of organizing information, versus actually using it to accomplish goals. https://borretti.me/article/unbundling-tools-for-thought
And another: whether text predictors might take agent-like action to make future text easier to predict. You might think no, but consider the closely related recommender systems, like the YouTube or Amazon algorithms...
Comment discussions: whether the structure of the transformer architecture is able to, in principle, carry out complex enough computations to simulate conscious minds.
Good discussion about the limits (or not) of simulators like ChatGPT. https://www.lesswrong.com/posts/MmmPyJicaaJRk4Eg2/the-limit-of-language-models
(Warning: The post is structured strangely. It starts by arguing for unlimited simulation power, and then counterargues. Don't give up early. The comments also have good discussion.)
That is: George gave Kathy his heart. The very next day Kathy gave George's heart away to Andrew. How does that … work?
(Figurative) hearts are not traditionally a transferable asset. Kathy could give *Kathy's* heart away, but not George's.
I think about this a lot. Perhaps too much. https://web.archive.org/web/20171216072414/http://squid314.livejournal.com/332946.html
For a White Elephant or Secret Santa gift exchange with a limit of (say) ¥4,000, is 4,000 × ¥1 coins the best gift or the worst gift? 🤔
Before you answer, remember https://www.pbs.org/newshour/economy/the-economics-of-wasteful-spending-the-dead-weight-loss-of-christmas