@niplav I dunno what shard theory is, but I agree with this notion.
@Paradox If you're interested: https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values
@Paradox They claim that human learning is a lot like current AI training: a lot of self-supervised pre-training + some fine-tuning + a little bit of RL (and, in this view, multi-agent RL on top)
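The rough shape of that recipe, as a toy sketch (the class name, stage names and step counts here are mine, not from the post; only the proportions matter):

```python
# Toy sketch of the claimed analogy: most of the learning signal comes from
# self-supervised prediction, a smaller chunk from supervised fine-tuning,
# and a sliver from RL. Names and step counts are made up for illustration.

class ToyLearner:
    def __init__(self):
        self.updates = {"self_supervised": 0, "fine_tuning": 0, "rl": 0}

    def update(self, stage, steps):
        self.updates[stage] += steps

def train_like_a_human(learner):
    learner.update("self_supervised", steps=1_000_000)  # predicting raw experience
    learner.update("fine_tuning", steps=10_000)         # imitation / explicit teaching
    learner.update("rl", steps=1_000)                   # reward and punishment on top
    return learner

learner = train_like_a_human(ToyLearner())
total = sum(learner.updates.values())
for stage, steps in learner.updates.items():
    print(f"{stage}: {steps / total:.2%} of updates")
```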
@niplav I would also agree with this. I think humans are just flesh AI.
@niplav We seem to have complicated values because we have competing values: a long list of stuff we care about, at varying priorities, and these change depending on the situation, i.e. mood, environment, what's on our mind, etc. Things feel complicated when we're far from understanding every influencing factor. Pretty sure even AI kinda operates on this concept. It's not quite a black box, but it's rather dark. We know the framework, but not the details (they use millions of parameters, no shot we do).
Of course, arbitrary rules can also make stuff seem complicated, even if you know them all, but our brains are pretty straightforward. We draw conceptual associations between a bunch of stuff, even if we all do it slightly differently, and there's always a pattern to them.
@Paradox not sure I understand
@Paradox yeah, that tracks with my model
@niplav So take this example. You like cheese, cars, and comedy.
Are you hungry? Do you like jokes about cheese? Cars with cheese patterns? Do you prefer stand-up or TV shows? Those three values can be permuted. Now let's say you hate the color blue. Do you like blue cheese? Is your hatred of blue or your love for cheese more important?
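One way to make that concrete (a toy model, all values and weights made up): score an option by summing how much it hits each thing you care about, weighted by how much you care about it right now.

```python
# Toy model of competing values: each option is scored by how much it expresses
# each value, weighted by context-dependent priorities. Numbers are arbitrary.

def score(option_features, priorities):
    # Sum of (how much the option expresses a value) * (how much you care right now).
    return sum(priorities.get(value, 0.0) * amount
               for value, amount in option_features.items())

# Current priorities: hungry, in the mood for jokes, mildly into cars, dislike blue.
priorities = {"cheese": 0.8, "comedy": 0.5, "cars": 0.3, "blue": -0.6}

options = {
    "blue cheese":            {"cheese": 1.0, "blue": 1.0},
    "cheese joke":            {"cheese": 0.4, "comedy": 1.0},
    "car with cheese decals": {"cars": 1.0, "cheese": 0.3},
}

for name, features in options.items():
    print(f"{name}: {score(features, priorities):+.2f}")

# With these weights, love of cheese (0.8) barely beats hatred of blue (-0.6),
# so blue cheese scores slightly positive; shift the priorities and it flips.
```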
Also, drives usually aren't sated permanently. You fulfill one, you're fine for a while, then you get hungry for it again. Some specific and broad goals do get satisfied, like writing a certain poem or getting the job you want, but others, like engaging in a hobby or spending time with a friend, don't work like that.
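The difference in toy code (the refill rate and such are arbitrary, only the shape of the behavior matters): a drive creeps back toward urgency after you satisfy it, a one-off goal stays done.

```python
# Toy contrast between recurring drives and one-off goals. Parameters are arbitrary.

class Drive:
    """A need that refills over time after being satisfied (hunger, hobbies, friends)."""
    def __init__(self, refill_rate=0.3):
        self.urgency = 1.0
        self.refill_rate = refill_rate

    def satisfy(self):
        self.urgency = 0.0

    def tick(self):
        # Urgency creeps back toward 1.0 each time step.
        self.urgency = min(1.0, self.urgency + self.refill_rate)

class Goal:
    """A one-off goal that stays achieved (write that poem, land that job)."""
    def __init__(self):
        self.done = False

    def satisfy(self):
        self.done = True

hunger, poem = Drive(), Goal()
hunger.satisfy()
poem.satisfy()
for day in range(4):
    hunger.tick()
    print(f"day {day}: hunger urgency {hunger.urgency:.1f}, poem done: {poem.done}")
```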