If you have advanced BCIs, then while training an AI system you might be able to use a distance metric between human neural activations and the model's weights as an additional training signal

To inch closer to human-like cognition in trained AI systems
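A minimal sketch of one way this could look. Everything here is made up for illustration (NeuralAlignmentLoss, projection, alpha are hypothetical names, not an existing API), and it compares the BCI-recorded activations against the model's hidden activations rather than its raw weights, since activations on the same stimuli are the dimensionally natural thing to match:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: assumes you can record human neural activations (via the BCI)
# and the model's hidden activations on the same stimuli.

class NeuralAlignmentLoss(nn.Module):
    """Adds a distance term between BCI-recorded human neural activations and
    the model's hidden activations, on top of the usual task loss."""

    def __init__(self, neural_dim: int, hidden_dim: int, alpha: float = 0.1):
        super().__init__()
        # Learned linear map from the recording space into the model's hidden space,
        # since the two won't share dimensionality or coordinates.
        self.projection = nn.Linear(neural_dim, hidden_dim)
        self.alpha = alpha  # weight of the alignment term relative to the task loss

    def forward(self, task_loss, hidden_acts, neural_recording):
        # hidden_acts: (batch, hidden_dim) model activations on some stimuli
        # neural_recording: (batch, neural_dim) BCI measurements on the same stimuli
        target = self.projection(neural_recording)
        alignment = F.mse_loss(hidden_acts, target)
        return task_loss + self.alpha * alignment
```

The extra term just nudges the model's internal representations toward whatever the human's brain is doing on the same inputs; how much that actually moves cognition toward human-likeness is exactly the open question.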

@niplav I wonder how much it would take to straight up simulate a human brain, and I think this is closer to that idea.
The reason I don't think AI models are quite there yet (speaking as someone with a basic understanding of them, so I might be missing something) is that they don't grow the same way.
Our brains are exposed to a wide variety of data and spend ten years being very flexible before that flexibility gradually decreases. AI models are basically a massively sped-up version of locking a kid in a basement for their whole childhood and making them do exactly one thing all their waking hours until they get super good at it, except without the trauma that results.

So I'm wondering whether any of that matters: how closely do these models actually mimic the structure and functionality of an organic brain, versus simply approximating it through cheaper means?

@Paradox yep, this very much moves towards brain emulation (from the top down or smth)

On WBE, see this report, which is forever on my reading list: fhi.ox.ac.uk/brain-emulation-r

Not sure about the disanalogy to humans. I've heard people claim that humans learn surprisingly similarly to current LLMs (toy sketch after the list below):
* vast amounts of self-supervised learning (prediction of text in LLMs and of sensory data in humans)
* some reinforcement learning on top (action-reaction in humans and RLHF in LLMs)
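To make the two-stage analogy concrete, here's a toy sketch. `model`, `reward_model`, and `model.sample()` are assumed, hypothetical interfaces, and the RL part is a bare REINFORCE-style update, not how RLHF is actually run in practice:

```python
import torch.nn.functional as F

def pretrain_step(model, tokens, optimizer):
    """Stage 1: self-supervised next-token prediction (the 'vast amounts' part)."""
    logits = model(tokens[:, :-1])                       # predict each next token
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           tokens[:, 1:].reshape(-1))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

def rl_step(model, reward_model, prompt, optimizer):
    """Stage 2: some reinforcement learning on top (no KL penalty, purely illustrative)."""
    response, log_probs = model.sample(prompt)           # assumed sampling interface
    reward = reward_model(prompt, response)              # scalar feedback signal
    loss = -(reward.detach() * log_probs.sum())          # reinforce rewarded outputs
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```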

@Paradox maybe some self-criticism or self-reflection/chain-of-thought type stuff (constitutional AI in LLMs)


@Paradox this assumes that the type of data learned on doesn't *really* matter, whether it's video or sensory or text or whatevs
