Deus Ex and Metal Gear were like, the AI will arise from state surveillance systems and will be better at filtering out misinformation than humans are.

And now that I think of it, maybe that kind of AI would've been nice.

But that's not the AI we got. No, the AI we got is good at creating misinformation.

What if we trained an AI to predict the news based on prior news? It'd probably still say bullshit, but...
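
A minimal sketch of that idea, assuming PyTorch + HuggingFace transformers: fine-tune a causal language model on chronologically ordered headlines, so the training objective is literally "given yesterday's news, continue into today's". The `days` list, the prompt format, and the gpt2 checkpoint are placeholder assumptions, not anything that's actually been run.

```python
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any small causal LM checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)


class NewsPairs(Dataset):
    """Pairs of (prior day's headlines, next day's headlines), oldest first."""

    def __init__(self, days):
        # days: list of strings, one blob of headlines per day, in date order
        self.pairs = list(zip(days[:-1], days[1:]))

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, i):
        prior, nxt = self.pairs[i]
        text = f"PRIOR NEWS:\n{prior}\nNEXT NEWS:\n{nxt}{tokenizer.eos_token}"
        enc = tokenizer(text, truncation=True, max_length=512,
                        padding="max_length", return_tensors="pt")
        input_ids = enc["input_ids"].squeeze(0)
        attention_mask = enc["attention_mask"].squeeze(0)
        # Standard causal-LM objective: labels are the inputs themselves,
        # with padding masked out so it isn't scored.
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100
        return {"input_ids": input_ids,
                "attention_mask": attention_mask,
                "labels": labels}


def train(days, epochs=1, lr=5e-5):
    loader = DataLoader(NewsPairs(days), batch_size=2, shuffle=True)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            loss = model(**batch).loss  # next-token cross-entropy
            loss.backward()
            opt.step()
            opt.zero_grad()
```

The pairing of consecutive days is the whole trick here; everything else is ordinary LM fine-tuning, and the output would still be statistically plausible bullshit rather than truth.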

Thinking more about the idea of an "algorithm for truth":

In order to preserve human freedom and creativity, it's critical that no AI that tries to tell truth from misinformation is allowed to moderate our communication channels, and that no such AI is assumed to be always right.

But that doesn't mean AIs shouldn't try to be right about the truth of things. And ChatGPT et al. don't even seem to be trying.

@wolf480pl yep, it's simulating whatever prompt+RLHF-induced mode collapse let it simulate
