@niplav I wonder how much it would take to straight up simulate a human brain, and I think this is closer to that idea.
The reason I don't think AI models are quite there yet (speaking as someone with only a basic understanding of them, so I might be missing something) is that they don't grow the same way.
Our brains are exposed to a wide variety of data and spend ten years being very flexible, before gradually decreasing in that respect. AI models are basically a massively sped-up version of locking a kid in the basement for their whole childhood and making them do exactly one thing all their waking hours until they get super good at it, except without the trauma that results.
So I'm wondering if any of that matters: how closely these models actually mimic the structure and functionality of an organic brain, vs just approximating it through cheaper means.
@Paradox yep, this very much moves towards brain emulation (from the top down or smth)
On WBE, see this report, which is forever on my reading list: https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf
Not sure about the disanalogy to humans. I've heard people claim that humans learn surprisingly similarly to current LLMs:
* vast amounts of self-supervised learning (prediction of text in LLMs and of sensory data in humans)
* some reinforcement learning on top (action-reaction in humans and RLHF in LLMs); see the toy sketch below
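To make the analogy concrete, here's a toy sketch of that two-stage pipeline in PyTorch. Everything in it is made up for illustration (the model, the sizes, the random "data", and the reward function); it only shows the shape of "self-supervised pretraining, then RL on the same weights", not any real training code:

```python
# Toy sketch of the two-stage pipeline: self-supervised next-token
# prediction, then a little reinforcement learning on top.
# All names, sizes, and the reward are invented for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, CTX = 16, 32, 8  # tiny made-up sizes

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)  # next-token logits at every position

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: "vast amounts" of self-supervised learning (here just a few
# steps of next-token prediction on random sequences standing in for data).
for _ in range(100):
    seq = torch.randint(0, VOCAB, (4, CTX + 1))
    logits = model(seq[:, :-1])
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), seq[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: "some reinforcement learning on top": a REINFORCE-style update
# with a toy reward standing in for human feedback.
def toy_reward(tokens):
    return (tokens == 3).float().mean(dim=1)  # reward emitting token 3

for _ in range(50):
    x = torch.zeros(4, 1, dtype=torch.long)  # start token
    log_probs = []
    for _ in range(CTX):
        step_logits = model(x)[:, -1]
        dist = torch.distributions.Categorical(logits=step_logits)
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
        x = torch.cat([x, tok.unsqueeze(1)], dim=1)
    reward = toy_reward(x[:, 1:])
    loss = -(torch.stack(log_probs, dim=1).sum(1) * reward).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Real RLHF additionally trains a reward model from human preference data and optimizes against it with PPO plus a KL penalty toward the pretrained model; the REINFORCE step here is just the simplest stand-in for "RL on top of the same weights".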