I'll take a second to appreciate that the SBF thing is really bad

@niplav it’s not good, but tbh I doubt it affects our chances much

@niplav go ahead. I will not elaborate further on this tho

@sophon Okay so my thought here is that alignment is an elastic enough problem that more money = better; academia doesn't understand the problem (think CIRL-type solutions), while EA understands it much better, so more money flowing to EA is better, and losing that money reduces the probability of success a bunch

i think our crux here is elasticity

@niplav sorry for dumb question: what does CIRL stand for? I assume the IRL is “inverse reinforcement learning”?

@niplav sorry, I do not pay much attention to outer alignment research :yui_shrug:

@niplav oh one other question: has this actually affected the FTX Foundation’s behavior yet?

@niplav does this apply to the worldview competition thing too?

@niplav I mean obviously it’s not your fault…
