@niplav I dunno what shard theory is, but I agree with this notion.
@Paradox If you're interested: https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values
@Paradox They claim that human learning is a lot like current AI training: a lot of self-supervised pre-training + some fine-tuning + a little bit of RL (and, in this view, multi-agent RL on top of that)
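For concreteness, here's a toy sketch of that recipe. Every function, corpus, and number below is invented for illustration; real training means gradient descent on a neural net, not count tables.

```python
# Toy illustration of the claimed recipe: self-supervised pre-training,
# then supervised fine-tuning, then a little RL. The "model" is just a
# dict of token -> next-token counts; all names here are hypothetical.
import random

def pretrain(corpus):
    """Self-supervised: learn next-token statistics from raw text."""
    model = {}
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        model.setdefault(cur, {}).setdefault(nxt, 0)
        model[cur][nxt] += 1
    return model

def finetune(model, examples):
    """Fine-tuning: upweight curated (prompt, reply) pairs."""
    for prompt, reply in examples:
        model.setdefault(prompt, {}).setdefault(reply, 0)
        model[prompt][reply] += 5  # stronger update than pre-training
    return model

def rl_step(model, prompt, reward_fn):
    """RL: sample a continuation, reinforce it in proportion to reward."""
    options = model.get(prompt, {"...": 1})
    choice = random.choices(list(options), weights=options.values())[0]
    options[choice] += reward_fn(choice)
    return choice

model = pretrain("the cat sat on the mat the cat ran")
model = finetune(model, [("cat", "purred")])
print(rl_step(model, "cat", lambda reply: 2 if reply == "purred" else 0))
```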
@niplav I would also agree with this. I think humans are just flesh AI.
@niplav We seem to have complicated values because we have competing values: a long list of stuff we care about, at varying priorities, and these change depending on the situation, i.e. mood, environment, what's on our mind, etc. Things feel complicated when we're far from understanding every influencing factor. Pretty sure even AI kinda operates on this concept. It's not quite a black box, but it's rather dark: we know the framework, but not the details (they use millions of parameters, no shot we know them all).
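A minimal sketch of that picture, assuming actions are scored by a context-weighted sum of value functions (all values, weights, and contexts below are made up):

```python
# Toy sketch of "competing values at varying priorities that shift with
# context": score each action as a context-weighted sum of value
# functions. Every name and number here is hypothetical.
VALUES = {
    "comfort":   lambda action: 1.0 if action == "rest" else 0.2,
    "curiosity": lambda action: 1.0 if action == "explore" else 0.1,
    "social":    lambda action: 1.0 if action == "chat" else 0.3,
}

# The same values, reweighted by situation (mood, environment, ...).
CONTEXT_WEIGHTS = {
    "tired": {"comfort": 0.8, "curiosity": 0.1, "social": 0.1},
    "bored": {"comfort": 0.1, "curiosity": 0.7, "social": 0.2},
}

def choose(context, actions=("rest", "explore", "chat")):
    weights = CONTEXT_WEIGHTS[context]
    score = lambda a: sum(w * VALUES[v](a) for v, w in weights.items())
    return max(actions, key=score)

print(choose("tired"))  # -> rest
print(choose("bored"))  # -> explore
```

With these weights, a "tired" context picks rest and a "bored" one picks explore; same values, different priorities depending on the situation.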
Of course, arbitrary rules can also make stuff seem complicated, even if you know them all, but our brains are pretty straightforward. We draw conceptual associations between a bunch of stuff, even if we all do it slightly differently, and there's always a pattern to them.
@Paradox not sure I understand
@Paradox drives seem really important, as do desires built on abstractions of those drives
@Paradox and drives are fulfilled at some point and then you gotta go satisfy another one, running from drive to drive
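A toy sketch of that drive-to-drive cycling (drive names, levels, and growth rates are made up): satisfying the most urgent drive resets it while the others build back up.

```python
# Each step, satisfy the most urgent drive; it is satiated for now
# while the remaining drives grow, so attention keeps rotating.
drives = {"hunger": 0.9, "sleep": 0.4, "social": 0.6}

for step in range(5):
    urgent = max(drives, key=drives.get)
    print(f"step {step}: satisfying {urgent} ({drives})")
    drives[urgent] = 0.0                           # fulfilled for now
    for d in drives:
        if d != urgent:
            drives[d] = min(1.0, drives[d] + 0.3)  # others build up
```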