@ErikUden @chjara
Good childhood https://worm.fandom.com/wiki/String_Theory
@gigabecquerel Dangerous to the world or to your wallet?
@erwinrossen The years of mindless data collection are starting to bear fruit 😃
The Taoists were wrong:
Recent masturbation likely doesn't decrease meditation quality.
http://niplav.site/notes#Does_Recent_Masturbation_Decrease_Meditation_Quality
@cosmiccitizen Parmenides criminally undervalued
@chjara Also ⍼
@Paradox this assumes that the type of data learned on doesn't *really* matter, if it's video or sensory or text or whatevs
@Paradox maybe some self-criticism or self-reflection/chain-of-thought type stuff (constitutional AI in LLMs)
@Paradox yep, this very much moves towards brain emulation (from the top down or smth)
On WBE see this report which is forever on my reading list: https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf
Not sure about the disanalogy to humans. I've heard people claim that humans learn surprisingly similarly to current LLMs:
* vast amounts of self-supervised learning (prediction of text in LLMs and of sensory data in humans)
* some reinforcement learning on top (action-reaction in humans and RLHF in LLMs)
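The two-stage picture above can be sketched as a toy (everything here is a made-up illustration, not anyone's actual training setup): a bigram counter stands in for self-supervised prediction, and a hand-written reward table stands in for the RL layer on top.

```python
from collections import Counter, defaultdict

# Stage 1: self-supervised learning -- predict the next token from context.
# (Toy bigram model standing in for LLM pretraining / sensory prediction.)
corpus = "the cat sat on the mat the cat ran".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev):
    # Return the most frequently observed successor of `prev`.
    return bigrams[prev].most_common(1)[0][0]

# Stage 2: reinforcement on top -- nudge the learned statistics with a
# reward signal. (Toy stand-in for RLHF / action-reaction feedback.)
reward = {("cat", "sat"): 1.0, ("cat", "ran"): -1.0}
for (prev, nxt), r in reward.items():
    bigrams[prev][nxt] += r

print(predict("cat"))  # reward breaks the pretraining tie in favor of "sat"
```

The point of the toy is only the shape of the pipeline: a large self-supervised phase that fits the data's statistics, then a comparatively small reward-driven adjustment layered on top.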
I operate by Crocker's rules[1].