@Paradox yep, this very much moves towards brain emulation (from the top down or smth)
On WBE, see this report, which is forever on my reading list: https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf
Not sure about the disanalogy to humans. I've heard people claim that humans learn surprisingly similarly to current LLMs:
* vast amounts of self-supervised learning (prediction of text in LLMs and of sensory data in humans)
* some reinforcement learning on top (action-reaction in humans and RLHF in LLMs)
@Paradox this assumes that the type of data learned on doesn't *really* matter, whether it's video or sensory or text or whatevs
@Paradox maybe some self-criticism or self-reflection/chain-of-thought type stuff (constitutional AI in LLMs)
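@Paradox fwiw, here's a toy sketch of what that three-part analogy looks like on the LLM side. This is made-up torch code (tiny model, fake reward and fake "revised" target), just to show the shape of the pipeline, not anyone's actual training setup:
```python
# Toy sketch of the three stages discussed above. All signals are stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, dim = 64, 32
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# 1. Self-supervised learning: predict the next token
#    (analogue of predicting sensory data) -> vast majority of the signal.
tokens = torch.randint(0, vocab_size, (1, 128))      # stand-in for lots of text
logits = model(tokens[:, :-1])
loss_ssl = F.cross_entropy(logits.reshape(-1, vocab_size),
                           tokens[:, 1:].reshape(-1))

# 2. Some reinforcement learning on top: reward-weighted log-likelihood
#    (REINFORCE-style), analogue of action -> reaction feedback.
sample = torch.randint(0, vocab_size, (1, 16))       # pretend the model sampled this
reward = torch.tensor(1.0)                            # hypothetical preference score
logp = F.log_softmax(model(sample), dim=-1).gather(-1, sample.unsqueeze(-1)).sum()
loss_rl = -reward * logp

# 3. Self-criticism / constitutional-AI-ish step: train on a "revised"
#    target that a critique pass produced.
revised = torch.randint(0, vocab_size, (1, 16))       # stand-in for the self-revised output
loss_crit = F.cross_entropy(model(revised[:, :-1]).reshape(-1, vocab_size),
                            revised[:, 1:].reshape(-1))

(loss_ssl + 0.1 * loss_rl + 0.1 * loss_crit).backward()
opt.step()
```
the point being: stage 1 dwarfs the other two in data/compute, which is the part that supposedly mirrors humans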