A lot of AI discussion presumes that an AI can just get answers to hard questions at will.
Has anyone asked ChatGPT what's in the nuclear notebook?
@WomanCorn I'm pretty sure that there is a prompt of <500 chars for which ChatGPT will tell you what's in the nuclear notebook.
It knows, but it won't tell you
@WomanCorn Actually I was confused, this isn't true
@WomanCorn I hope
@WomanCorn I think the crux here is whether you believe human intelligence is close to the optimum *and* whether it is hard to get to this optimum
(plus whether "predict the next token" is a sufficient objective for reinforcement learning; remind me to read the Decision Transformer paper at some point)
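(For context, the Decision Transformer paper recasts RL as sequence modeling: trajectories are serialized as (return-to-go, state, action) tokens and the model is trained to predict the next token, so asking for a high return at test time yields the corresponding action. Here's a minimal sketch of that serialization, assuming toy discrete states and actions; the types and function names are illustrative, not from the paper's code.)

```python
# A minimal sketch of the Decision Transformer idea: reinforcement learning
# recast as next-token prediction. Trajectories are serialized as
# (return-to-go, state, action, return-to-go, state, action, ...) and a
# sequence model is trained on the usual next-token objective.
# All names here are illustrative assumptions, not from any real library.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Step:
    state: int     # toy discrete state
    action: int    # toy discrete action
    reward: float

def to_token_sequence(trajectory: List[Step]) -> List[Tuple[str, float]]:
    """Serialize a trajectory Decision-Transformer-style:
    each step contributes (return-to-go, state, action) tokens."""
    rtg = sum(s.reward for s in trajectory)  # return-to-go starts at the total return
    tokens = []
    for step in trajectory:
        tokens.append(("rtg", rtg))
        tokens.append(("state", float(step.state)))
        tokens.append(("action", float(step.action)))
        rtg -= step.reward  # return-to-go shrinks as reward is collected
    return tokens

# Training reduces to "predict the next token" over sequences like this;
# at test time you condition on a *high* desired return-to-go and read off
# the action tokens the model predicts.
traj = [Step(state=0, action=1, reward=0.0), Step(state=1, action=0, reward=1.0)]
for tok in to_token_sequence(traj):
    print(tok)
```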