I'll take a second to appreciate that the SBF thing is really bad
@sophon yea
@niplav it’s not good, but tbh I doubt it affects our chances much
@sophon I will think about that
@niplav go ahead. I will not elaborate further on this tho
@sophon Okay, so my thought here is: alignment is an elastic enough problem that more money = better. Academia doesn't understand the problem (think CIRL-type solutions), EA understands it so much better ⇒ more money to EA helps, and losing the money reduces the prob of success a bunch
I think our crux here is elasticity
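(A minimal sketch of what "elasticity" is doing in this argument, assuming a made-up power-law model P(success) ∝ F^ε — the exponent, scale, and 50% funding loss below are illustrative assumptions, not estimates from the chat:)

```python
# Toy model of how "elastic" alignment progress is in funding.
# Assumption: P(success) scales as funding**eps for small P; eps and
# the halved funding level are illustrative values, not estimates.

def p_success(funding: float, eps: float = 0.3, scale: float = 0.05) -> float:
    """Success probability under a power-law returns-to-funding model."""
    return scale * funding**eps

base = p_success(1.0)   # normalized pre-collapse funding level
after = p_success(0.5)  # suppose half the funding disappears

# Elasticity eps means a 1% funding loss costs ~eps% of P(success);
# with eps = 0.3, halving funding cuts P(success) by ~19%.
print(f"relative drop in P(success): {1 - after / base:.1%}")
```

Under this model the crux is just how big ε is: near 0, the money barely matters; near 1, funding losses translate roughly one-for-one into lost probability of success.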
@niplav sorry for the dumb question: what does CIRL stand for? I assume the IRL is “inverse reinforcement learning”?
@sophon cooperative inverse reinforcement learning
https://proceedings.neurips.cc/paper/2016/file/c3395dd46c34fa7fd8d729d8cf88b7a8-Paper.pdf
@niplav oh, Stuart Russell’s thing
@niplav sorry, I do not pay much attention to outer alignment research
@niplav oh, one other question: has this actually affected the FTX Foundation’s behavior yet?
@sophon They won't be able to make any more grants, apparently [1]
[1]: https://forum.effectivealtruism.org/posts/xafpj3on76uRDoBja/the-ftx-future-fund-team-has-resigned-1
@niplav does this apply to the worldview competition thing too?
@sophon yeah quite likely
@niplav shit
@sophon sorry :-/
@niplav I mean obviously it’s not your fault…
@niplav you mean FTX losing money?