Wondering how much, and what kind of, media and policy attention it would take for this to happen.
Seems unlikely that the government would end up funding enough (or all) of the promising research for EA to feel its money is better spent elsewhere, right?
---
RT @ohwizenedtortle
What would the world need to look like for EA to stop funding alignment research?
https://twitter.com/ohwizenedtortle/status/1611497563810824192
@tortle
1. Solving the easy problem of corrigibility (while still adhering to the vNM axioms)
2. Interpretability at the level where we can extract a hand-coded algorithm from AlphaFold 2, and similar feats (maybe 100 billion invested in interpretability in total, or something?)
3. Making GPT-4 never say *anything* violent
4. A formula for an embedded diamond maximizer
That would at least make me think "ok, I should probably focus on other things"