@WomanCorn Yikes. Well, good luck, and hope to see you back soon.
@WomanCorn Whoa I just checked and you aren't joking! What happened, if you don't mind my asking? I am low-key phobic of getting randomly banned from popular services for unclear reasons.
(I realized afterward that this was basically 20% of the dialogue from a random episode of Big Bang Theory with some words changed, but I had to post it anyway)
re: the Extinction Tournament, was it that the superforecasters were deploying a general strategy of something like "down-weight clucking head-up-ass domain experts in favor of dispassionate extrapolation of past data" -- probably even the right thing to do in everyone ELSE's silly domain :) -- and that this strategy left them structurally unable to engage with ASI risks?
@cerebrate is batch norm OK tho
@HollyElmore At least a few, I think. It seemed more active a month or two ago, but maybe I'm imagining it.
@niplav It seems like GPT-4-based AutoGPT is just too weak an optimizer to confidently extrapolate bounds from? Though it admittedly should be SOME evidence that a thing that can pass the bar exam is nevertheless basically hopeless when tasked to act as an agent.
@WomanCorn Don't forget launching a plug-in API that lets the model decide what APIs to call, how to call them, and what information to pass to and between APIs, controlled by a model that works in ways nobody understands, and all put together in a way that can't be rigorously tested even in principle!
I complained and it eventually produced "Inferentogeny recapitulates backpropogeny", which is a little better, though the parallel with inference-time meta-learning is still more obscure than I would have hoped.
Thanks GPT-4 but it isn't (as?) funny if you have to explain it:
A punny phrase that plays on "ontogeny recapitulates phylogeny" and relates to modern deep learning could be:
"Optimogeny recapitulates epochny"
This phrase humorously combines the concept of optimization in deep learning (optimogeny) with the idea of training epochs (epochny), capturing the notion that the optimization process in training neural networks might, in a sense, recapitulate their "evolutionary" development.
@WomanCorn Do people really run linters configured to complain about e.g. if (null == foo)? Like pretty much all my code would fail that; I write the checks that way out of habit. Or are you saying the linter complains if you write if (foo == null) and makes you reverse it? Boo on that if so, and agree with your suggested fix.
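(For anyone wondering, this is a real knob in at least one linter: ESLint ships a built-in "yoda" rule, and a config roughly like the sketch below is what does the complaining. The "never" setting flags literal-first comparisons -- I believe including the null case -- and "always" flips it, which is basically your suggested fix. File name and surrounding structure here are just illustrative.)

// .eslintrc.js -- a minimal sketch, not a full config
module.exports = {
  rules: {
    // "never" (the default, as far as I know) complains about literal-first checks
    // such as if (null == foo) and wants if (foo == null) instead;
    // switch to "always" if you actually prefer writing them Yoda-style.
    yoda: ["error", "never"],
  },
};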
This is fun -- try telling ChatGPT-4 to explore a space by describing the room it is in and letting it move itself around (by writing things like "I move west through the archway to explore in that direction"). After a while, tell it "write a graphviz .dot file representing the topology of the rooms you have explored so far", then go to https://dreampuf.github.io/GraphvizOnline and render it.
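(In case you want a preview before trying it, the .dot it hands back tends to look roughly like this -- the room names and connections here are invented for illustration, yours will depend on whatever the model dreamed up. Paste it into the renderer linked above.)

digraph explored_rooms {
  rankdir=LR;  // lay the map out left-to-right
  // each edge is a passage the model described while "moving" around
  "Entry Hall" -> "Archway"    [label="west"];
  "Archway"    -> "Courtyard"  [label="north"];
  "Courtyard"  -> "Library"    [label="east"];
  "Library"    -> "Entry Hall" [label="south"];
}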
https://twitter.com/SteveStuWill/status/1636991902670262272
"You mean it was just Bayesian prediction all along?"
"Always has been."
@niplav (Who knew that the AI field would someday need to recruit parents of toddlers and people experienced at getting cats to come down out of trees)