@erwinrossen The years of mindless data collection are starting to bear fruit 😃
The Taoists were wrong:
Recent masturbation likely doesn't decrease meditation quality.
http://niplav.site/notes#Does_Recent_Masturbation_Decrease_Meditation_Quality
@cosmiccitizen Parmenides criminally undervalued
@chjara Also ⍼
@Paradox this assumes that the type of data learned on doesn't *really* matter, whether it's video or sensory or text or whatevs
@Paradox maybe some self-criticism or self-reflection/chain-of-thought type stuff (constitutional AI in LLMs)
@Paradox yep, this very much moves towards brain emulation (from the top down or smth)
On WBE (whole brain emulation), see this report, which is forever on my reading list: https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf
Not sure about the disanalogy to humans. I've heard people claim that humans learn surprisingly similarly to current LLMs (toy sketch after the list):
* vast amounts of self-supervised learning (prediction of text in LLMs and of sensory data in humans)
* some reinforcement learning on top (action-reaction in humans and RLHF in LLMs)
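Since I mentioned a toy sketch: here's a heavily simplified illustration of that two-stage picture in Python. It's not how LLMs are actually trained (no gradients, no neural net, no real RLHF), just made-up counting code to show "self-supervised prediction first, reinforcement signal on top".

```python
# Toy sketch of the two-stage picture above, not a real training loop:
# stage 1 learns next-word statistics by counting ("self-supervised prediction"),
# stage 2 nudges those statistics with a scalar reward, standing in for RLHF.
# Everything here (corpus, function names, reward value) is made up for illustration.
from collections import defaultdict

corpus = "the cat sat on the mat the cat sat down the cat ate".split()

# Stage 1: self-supervised prediction, i.e. count which word follows which.
counts = defaultdict(lambda: defaultdict(float))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1.0

def predict(word):
    follows = counts[word]
    return max(follows, key=follows.get) if follows else None

# Stage 2: a little reinforcement on top, i.e. reweight continuations by feedback.
def reinforce(prev, nxt, reward):
    counts[prev][nxt] += reward  # positive reward makes the continuation more likely

print(predict("cat"))       # 'sat' (the most common continuation in the corpus)
reinforce("cat", "ate", 5)  # pretend human feedback preferred 'ate'
print(predict("cat"))       # now 'ate'
```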
In 2017 there were ~100 million beehives in the world
(fao.org/faostat/en/#da…, parameters Live Animals, World + (Total), Beehives, Stocks, from https://forum.effectivealtruism.org/s/ptTzTsuDJEK8awdf6/p/JR4azSe4Yr2qdBBwt )
With ~1 trillion bees (rough sanity check below)
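A rough back-of-the-envelope check on those two figures (assuming both are in the right ballpark):

```python
hives = 100e6        # ~100 million beehives worldwide (FAOSTAT, 2017)
bees = 1e12          # ~1 trillion bees
print(bees / hives)  # 10000.0, i.e. roughly 10,000 bees per hive on average
```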
I just gotta really appreciate our dear admin @pee_zombie. Server always running, no random bans because someone didn't like the right videogame, appreciable schizoposts… just all around great.
I operate by Crocker's rules[1].