@WomanCorn Actually I was confused, this isn't true
@WomanCorn I think the crux here is whether you believe human intelligence is close to the optimum *and* whether that optimum is hard to reach
(plus whether "predict-the-next-token" is sufficient for reinforcement learning; remind me to read the Decision Transformer paper at some point)
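(For context, a minimal sketch of the Decision Transformer idea from Chen et al. 2021, i.e. one way "predict-the-next-token" gets recast as offline RL: serialize trajectories as (return-to-go, state, action) tokens and train a causal transformer to predict each action from the tokens before it. The module names, dimensions, and loss below are my own illustrative choices, not the paper's exact setup.)

```python
# Sketch only: a tiny Decision-Transformer-style model, assuming PyTorch.
import torch
import torch.nn as nn

class TinyDecisionTransformer(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=64, n_layers=2, n_heads=2, max_len=60):
        super().__init__()
        # Separate linear "tokenizers" for returns-to-go, states, and actions.
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T, _ = states.shape
        # Interleave tokens as (R_1, s_1, a_1, R_2, s_2, a_2, ...)
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_action(actions)],
            dim=2,
        ).reshape(B, 3 * T, -1)
        tokens = tokens + self.pos(torch.arange(3 * T, device=tokens.device))
        # Causal mask: each token may only attend to earlier tokens.
        mask = torch.triu(torch.full((3 * T, 3 * T), float("-inf"),
                                     device=tokens.device), diagonal=1)
        h = self.backbone(tokens, mask=mask)
        # Predict the action at step t from the state token at step t
        # (positions 1, 4, 7, ...), which has seen R_t, s_t and all earlier steps.
        return self.predict_action(h[:, 1::3])

# Training is then plain supervised next-token regression on logged trajectories
# (dummy data here, continuous actions, behavior-cloning-style loss):
model = TinyDecisionTransformer(state_dim=4, act_dim=2)
rtg = torch.randn(8, 10, 1)
states = torch.randn(8, 10, 4)
actions = torch.randn(8, 10, 2)
pred = model(rtg, states, actions)
loss = nn.functional.mse_loss(pred, actions)
loss.backward()
```

At inference you just condition on a high target return-to-go and sample actions autoregressively, which is the sense in which sequence prediction stands in for RL.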
@WomanCorn I hope