It’s important to distinguish between the kind of classical utilitarianism that assumes some objective moral utility function determining what is right, and the utilitarianism that merely says one way to model moral action is to define a utility function over the things you care about (higher values being better) and ask how actions affect that function.
I believe the second is true and useful, unlike the first.
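To make the second, "modeling" sense concrete, here's a minimal sketch: score world-states with a utility function over things you care about, and rank actions by the score of the state they lead to. Everything here (the variables, the weights, the actions) is made up for illustration, not a claim about what the right utility function is.

```python
def utility(state):
    """Score a world-state; higher is better. Weights are arbitrary."""
    return 2.0 * state["wellbeing"] + 1.0 * state["knowledge"]

def apply_action(state, effects):
    """Return the state after an action's hypothetical effects."""
    return {k: state[k] + effects.get(k, 0.0) for k in state}

state = {"wellbeing": 5.0, "knowledge": 3.0}
actions = {
    "teach": {"knowledge": 3.0},
    "rest": {"wellbeing": 1.0},
}

# Rank actions by the utility of the state each one produces.
best = max(actions, key=lambda a: utility(apply_action(state, actions[a])))
print(best)  # "teach": 2*5 + 1*6 = 16, vs. "rest": 2*6 + 1*3 = 15
```

Nothing in this model says the function is objectively correct; it's just a tool for comparing options given what you already value.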
Moreover, during the training or evolution of a superintelligence, Omohundro drives would likely not only emerge but become intrinsically valued (à la mesa-optimizers), and override the original goal.
Notice that in nature, every terminal goal has always come about as a proxy for an Omohundro drive.
I just understood the argument [against the orthogonality hypothesis](https://web.archive.org/web/20200701082447/https://www.xenosystems.net/against-orthogonality/). I’m not completely sold, but I’m interested.
It’s not entirely wrong; a superintelligent paperclip maximizer could exist. But terminal goals are not in practice independent of intelligence, because an agent that pursues Omohundro drives for their own sake may be able to self-improve more efficiently than a paperclip maximizer can.
Eventually AI could have qualitatively richer consciousness than humans. Human self-awareness is limited: there are many parts of our minds that most people have no idea how to attend to intentionally, let alone be aware of continuously. An AI with agency over this aspect of itself could easily expand its awareness past what humans are capable of.
The negative reaction to lots of non-artists getting the chance to make something they find aesthetically nice and imitating a style they admire is insane.
No one who types a few words into ChatGPT is going to start thinking of themselves as better than Miyazaki.
I can understand criticizing the way OpenAI is profiting from this, but please don't call regular people trying out a new tool barbaric and dystopian.