I think it's pretty clear that OpenAI is completely incompetent at safety, for any kind of safety you care to name.

- Ordinary cybersecurity breaches
- Can't keep the model from going Waluigi
- Not a paperclip maximizer, but only because it's a chatbot with no goals

@WomanCorn Don't forget launching a plug-in API that lets the model decide which APIs to call, how to call them, and what information to pass to and between them, all controlled by a model that works in ways nobody understands, and all put together in a way that can't be rigorously tested even in principle!
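To make the worry concrete, the control flow being described looks roughly like the sketch below. This is a hypothetical toy, not any real plug-in API: the function names and registry are invented for illustration. The point is that the model's own text output chooses which external API runs and with what arguments, so the set of possible behaviors is as untestable as the model itself.

```python
def dispatch(model_output: dict, registry: dict) -> str:
    """Route a model-chosen tool call.

    `model_output` is an illustrative stand-in for whatever structured
    call the model emits; `registry` maps tool names to callables.
    """
    name = model_output["tool"]       # the model picks the API...
    args = model_output["arguments"]  # ...and every argument it receives
    if name not in registry:
        return f"unknown tool: {name}"
    return registry[name](**args)

# Toy registry. In the real system, each result flows back into the
# model's context, which may then call another tool: data is chained
# between APIs with no fixed, auditable control flow.
registry = {
    "search": lambda query: f"results for {query!r}",
    "fetch": lambda url: f"contents of {url}",
}

print(dispatch({"tool": "search", "arguments": {"query": "waluigi"}}, registry))
```

Because `name` and `args` come straight from model output, exhaustively testing this loop would require enumerating everything the model might ever emit, which is the "can't be rigorously tested even in principle" part.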

Mastodon