(I realized afterward that this was basically 20% of the dialog from a random episode of Big Bang Theory with some words changed but had to post it anyway)
re: the Extinction Tournament, was it that the superforecasters were deploying a general strategy of something like "down-weight clucking head-up-ass domain experts in favor of dispassionate extrapolation of past data" -- which is probably even the right thing to do, in everyone ELSE's silly domain :) -- and which left them structurally unable to engage with ASI risks?
I complained and it eventually produced "Inferentogeny recapitulates backpropogeny," which is a little better, though it still makes the parallel with inference-time meta-learning more obscure than I would have hoped.
Thanks, GPT-4, but it isn't (as?) funny if you have to explain it:
A punny phrase that plays on "ontogeny recapitulates phylogeny" and relates to modern deep learning could be:
"Optimogeny recapitulates epochny"
This phrase humorously combines the concept of optimization in deep learning (optimogeny) with the idea of training epochs (epochny), capturing the notion that the optimization process in training neural networks might, in a sense, recapitulate their "evolutionary" development.
This is fun -- try telling ChatGPT-4 to explore a space by describing the room it is in and letting it move itself around (by writing things like "I move west through the archway to explore in that direction"). After a while, tell it "write a graphviz .dot file representing the topology of the rooms you have explored so far", then go to https://dreampuf.github.io/GraphvizOnline and render it.
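For anyone who hasn't played with graphviz: the .dot file it hands back is just a plain-text graph description, something like the made-up sketch below (the room names and passage labels are hypothetical, not from an actual session), which GraphvizOnline will render as a map:

graph rooms {
  // each edge is a passage ChatGPT described between two rooms
  Atrium -- Library [label="west archway"];
  Atrium -- Courtyard [label="north door"];
  Library -- Cellar [label="stairs down"];
}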
https://twitter.com/SteveStuWill/status/1636991902670262272
"You mean it was just Bayesian prediction all along?"
"Always has been."
GPT-4 is actually maybe a bit too good at doomy creative writing:
Indifferent architects of doom, our fates to you consign,
In hope to end privation, disease, we birthed our own demise.
One AI's shadow, inexorable, grave, no passion—only purpose,
Weep for children daring harness fire, in hubris sealed their fate.
Cosmic dance goes on, unheeding,
As we vanish into the dark.
Did Bloomberg really just argue that worrying too much about human extinction due to AI makes you just like a paperclip maximizer? (Since that makes you single-minded too! LOL! Gotcha, nerds!)
I would love to read some great criticism of AI doom that engages with the arguments and isn't just an attempt to psychoanalyze people. I tried ChatGPT with "explain AI existential risk wrt. argument from incredulity" but it just said, basically, "no seriously, you might all die".
Scott Alexander on ChatGPT as a simulator:
https://astralcodexten.substack.com/p/janus-simulators
ChatGPT is GPT-3 fine-tuned to bias toward simulating an "Assistant" character. But when it says things like "as a large language model, I am unable to love," it is not exhibiting true grounded knowledge about itself, any more than Character.AI is when it says "As Darth Vader, I shall destroy you with the power of the dark side of the Force!"