1. OpenAI's "release early" plan is not insane
2. LLMs will become agents, because it's profitable and humans will deliberately make them into agents
3. "The public" seems pretty receptive to the idea of AI risk
4. Doom is more likely to come from humans simply instructing the AI to recursively self-improve, or to outright destroy the world
5. Once sentient AIs are released, humans are likely to try to torture them; we need to set up guardrails ASAP
https://www.lesswrong.com/posts/3DyXQkkkGnSgy95ex/briefly-how-i-ve-updated-since-chatgpt