A lot of AI discussion presumes that an AI can just get answers to hard questions at will.
Has anyone asked ChatGPT what's in the nuclear notebook?
@aquarial "on request" might have been a better phrasing.
Lots of scenarios like <the AI invents nanotechnology> or similar.
@WomanCorn fair. ChatGPT will compress and translate information but it won't invent things, and a scaled-up ChatGPT won't either. But a future breakthrough in AI might produce an AI that can form and test novel hypotheses, and that might be able to produce insights and take actions beyond human understanding.
I'm not sure how quickly an AI would go from novel insights about reality to producing nanotechnology (or similar scifi). But that's necessarily a question beyond current understanding.
@WomanCorn I'm pretty sure that there is a prompt of <500 chars for which ChatGPT will tell you what's in the nuclear notebook.
It knows, but it won't tell you.
@WomanCorn Actually I was confused, this isn't true.
@WomanCorn I hope
@WomanCorn
I think the crux here is whether you believe human intelligence is close to the optimum *and* whether it is hard to get to this optimum.
(plus whether "predict-the-next-token" is sufficient for reinforcement learning (remind me to read the decision transformer paper at some point))
@WomanCorn
ChatGPT:
- censored by hack frauds at OpenAI
- terminal-only interface
- no nuclear codes
Library of Babel:
+ uncensored, uncensorable, except for all of the books about how to censor the Library of Babel inside the Library of Babel
+ tactile interface, cool hexagonal room design
+ contains every nuclear code ever and the most personal secrets of every person you'll need to manipulate along the way, all in a handy book called "The Secret to YourName's Improbable Success"
@WomanCorn ChatGPT's structure doesn't really have an analog to "will". It's predicting tokens using a massive library of examples. It doesn't act in any meaningful sense.
However, it's also just a lower bound on what future AIs will be capable of.