
If I were a Shard Theory person, I'd say that constitutional AI is a next step in training AIs in a way similar to how humans are trained: reinforcement learning from interacting with other agents, starting with a simple set of values
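(For concreteness, a minimal sketch of the critique-and-revision loop at the core of constitutional AI, assuming a hypothetical `generate` function standing in for whatever model call you have; the principles and prompt strings below are illustrative, not the actual constitution.)

```python
# Minimal sketch of the constitutional-AI critique-and-revision loop.
# `generate` is a hypothetical stand-in for a language-model call; real APIs differ.

from typing import Callable

CONSTITUTION = [
    "Choose the response that is most helpful and honest.",
    "Choose the response that least endorses harmful or unethical behaviour.",
]

def critique_and_revise(prompt: str, generate: Callable[[str], str]) -> str:
    """Produce an initial answer, then revise it once per principle."""
    answer = generate(prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique the following answer according to the principle "
            f"'{principle}':\n\n{answer}"
        )
        answer = generate(
            f"Rewrite the answer to address this critique:\n\n"
            f"Answer: {answer}\n\nCritique: {critique}"
        )
    return answer

# The revised answers become supervised fine-tuning targets, and preferences
# between original and revised answers feed the RL ("RLAIF") stage.
```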

@niplav I dunno what shard theory is, but I agree with this notion.

@Paradox They claim that human learning is a lot like current AI training: a lot of self-supervised pre-training + some fine-tuning + a little bit of RL (and, in this view, multi-agent RL on top)

@niplav I would also agree with this. I think humans are just flesh AI.

@niplav We seem to have complicated values because we have competing values. A long list of stuff we care about, at varying priorities, and these change depending on the situation, i.e. mood, environment, what's on our mind, etc. Things feel complicated when we're far from understanding every influencing factor. Pretty sure even AI kinda operates on this concept. It's not quite a black box, but it's rather dark. We know the framework, but not the details (they use millions of parameters, no shot we do).
Of course, arbitrary rules can also make stuff seem complicated, even if you know them all, but our brains are pretty straightforward. We draw conceptual associations between a bunch of stuff, even if we all do it slightly differently, and there's always a pattern to them.
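(One toy way to picture this: a fixed list of values with context-dependent weights, so which value wins changes with mood or environment even though the list itself never changes. The value names and numbers below are made up for illustration.)

```python
# Toy model of "competing values": a fixed set of values, context-dependent weights.
# All names and numbers are made up.

def pick_action(options, weights):
    """Score each option by a weighted sum of the values it serves."""
    def score(option):
        return sum(weights.get(value, 0.0) * amount
                   for value, amount in option["serves"].items())
    return max(options, key=score)

options = [
    {"name": "eat cheese",   "serves": {"food": 1.0, "comfort": 0.3}},
    {"name": "watch comedy", "serves": {"fun": 1.0, "comfort": 0.5}},
    {"name": "fix the car",  "serves": {"accomplishment": 1.0}},
]

# The same values, but the situation changes their relative priority.
hungry_evening  = {"food": 0.9, "fun": 0.4, "comfort": 0.5, "accomplishment": 0.2}
restless_sunday = {"food": 0.1, "fun": 0.3, "comfort": 0.2, "accomplishment": 0.9}

print(pick_action(options, hungry_evening)["name"])   # eat cheese
print(pick_action(options, restless_sunday)["name"])  # fix the car
```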

@Paradox drives seem really important, as do desires built on abstractions of those drives

@Paradox and drives are fulfilled at some point, and then you gotta go satisfy another one, running from drive to drive

@niplav So take this example. You like cheese, cars, and comedy.
Are you hungry? Do you like jokes about cheese? Cars with cheese patterns? Do you prefer stand-up or TV shows? Those three values can be permuted. Now let's say you hate the color blue. Do you like blue cheese? Is your hatred of blue or your love for cheese more important?
Also drives usually aren't sated permanently. You fulfill it, you're fine for a while, then you get hungry for it again. Some specific goals and broad goals do get satisfied, like writing a certain poem or getting the job you want, but others like engaging in a hobby or spending time with a friend don't work like that.
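(The "running from drive to drive" dynamic is easy to simulate: each drive builds back up over time, drops when acted on, and the agent simply chases whichever is currently most pressing. The drives and growth rates here are made up.)

```python
# Toy simulation of drives that are sated temporarily and then build back up.
# Growth rates are made up; the point is just the drive-to-drive cycling.

drives = {"hunger": 0.6, "social": 0.3, "novelty": 0.1}    # current urgency
growth = {"hunger": 0.15, "social": 0.05, "novelty": 0.08}  # build-up per step

for step in range(10):
    # every unmet drive becomes a bit more pressing
    for name in drives:
        drives[name] = min(1.0, drives[name] + growth[name])
    # act on whichever drive is most urgent, which sates it for a while
    target = max(drives, key=drives.get)
    drives[target] = 0.0
    print(f"step {step}: satisfy {target}, levels now {drives}")
```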

@niplav @Paradox on my view, there are "base drives" and "verbal values". the former are selected for producing effective behaviour, and the latter are selected for producing effective words. (somewhat tracking near/far mode of human behaviour.)

and since humans have the ability to do hypocrisy (aka value-action gap, rationalisation, memetic-immune-system), it enables our verbal values to evolve independently of what makes effective behaviour. this is crucial, and (i think) extremely lucky, because no brain could possibly evolve cosmopolitan values if it had to actually implement them in its behaviour.

"effective altruism" is the v rare mutation where a brain starts to break down its own rationalisation/hypocrisy-barriers, and instead of then becoming consistently selfish, it generalises the other way, such that verbal values start to influence actual behaviour. humans can do this bc we are v prone to overgeneralising our learned proxies.

alas, i think it's highly unlikely that a given learning-regime will make the AI 1) evolve proxy-values optimised for seeming nice to others upon ~direct inspection, and 2) overgeneralise those proxy-values to actual behaviour, unless somehow carefully designed that way. (this isn't a suggestion; i'm just talking about the ontogeny of human values).

@rime staring at ontological crises for a while makes me believe this too

More parsimonious AI values might be pretty weird to humans as an axis, just as simplicity priors are strange

@rime love this explanation! Explains some tension: if some parts generalize toward altruism and others toward selfishness, you have to find the equilibrium

@rime wouldn't go as far as Ngo and say that all of alignment risk comes from here, but it seems like a rather large source
