I suspected that arguments in favor of rushing to build ASI as quickly as possible had to be flawed in some way, but I admit I was not expecting to see actual Time Cube.

(I realized afterward that this was basically 20% of the dialog from a random episode of The Big Bang Theory with some words changed, but I had to post it anyway)

I wonder if somewhere on the net there is a forum of people (mostly young adults, presumably) obsessed with, like, different brands of canned green beans, and they are like "poser doesn't even know that Del Monte's Mexico City plant has used starch-based label glue since 2017"

Step 1: Find a hypervelocity star being ejected from Milky Way that is on course to reach another galaxy eventually
Step 2: Build a megastructure around it and hitch a ride
Step 3: Profit
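
For scale, here's a back-of-envelope sketch of how long Step 1's "eventually" is, assuming a typical hypervelocity-star speed of ~1,000 km/s and Andromeda at ~2.5 million light-years (round numbers of mine, not anything from the post):

```python
# Rough travel-time estimate for a hypervelocity star headed to Andromeda.
# Assumed round numbers: ~1,000 km/s ejection speed, ~2.5 million ly distance.

KM_PER_LIGHT_YEAR = 9.461e12   # kilometers per light-year
SECONDS_PER_YEAR = 3.156e7     # seconds per year

speed_km_s = 1_000.0           # typical hypervelocity star speed (assumed)
distance_ly = 2.5e6            # Milky Way -> Andromeda (assumed)

travel_time_years = distance_ly * KM_PER_LIGHT_YEAR / speed_km_s / SECONDS_PER_YEAR
print(f"travel time: ~{travel_time_years:.1e} years")  # roughly 7.5e8 years
```

So "eventually" means on the order of a billion years; plan Step 2 accordingly.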

re: the Extinction Tournament, was it that the superforecasters were deploying a general strategy of something like "down-weight clucking head-up-ass domain experts in favor of dispassionate extrapolation of past data" -- which is probably even the right thing to do, in everyone ELSE's silly domain :) -- which led them to be structurally unable to engage with ASI risks?

I wish slightly less cosmic expected value were apparently hinging on Eliezer Yudkowsky's capacity to resist feeding trolls on Twitter.

I complained and it eventually produced "Inferentogeny recapitulates backpropogeny", which is a little better, though the parallel with inference-time meta-learning is still more obscure than I had hoped.

Thanks, GPT-4, but it isn't (as?) funny if you have to explain it:

A punny phrase that plays on "ontogeny recapitulates phylogeny" and relates to modern deep learning could be:

"Optimogeny recapitulates epochny"

This phrase humorously combines the concept of optimization in deep learning (optimogeny) with the idea of training epochs (epochny), capturing the notion that the optimization process in training neural networks might, in a sense, recapitulate their "evolutionary" development.

This is fun -- try telling ChatGPT-4 to explore a space by describing the room it is in and letting it move itself around (by writing things like "I move west through the archway to explore in that direction"). After a while, tell it "write a graphviz .dot file representing the topology of the rooms you have explored so far", then go to dreampuf.github.io/GraphvizOnl and render it.
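
If you'd rather script that loop than paste turns into the ChatGPT UI, here is a minimal sketch assuming the OpenAI Python SDK with an API key in the environment; the model name and prompts are illustrative, not an exact recipe:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Running chat history, so the model remembers the rooms it has described.
messages = [{
    "role": "system",
    "content": ("You are exploring an imaginary building. Describe the room "
                "you are in; when I state a movement, narrate the new room."),
}]

def chat(prompt: str) -> str:
    """Send one user turn and keep both sides of the exchange in the history."""
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    return text

print(chat("Describe the room I start in."))
print(chat("I move west through the archway to explore in that direction."))
# ...wander for as long as you like, then:
dot_source = chat("Write a graphviz .dot file representing the topology of "
                  "the rooms you have explored so far. Output only the .dot source.")
print(dot_source)  # paste into an online Graphviz renderer to see the map
```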

Does it still count as "futurism" when it is getting to be less like "your kids might have X" and more like "By next year there'll likely be X" but then it keeps happening a week later instead?

GPT-4 is actually maybe a bit too good at doomy creative writing:

Indifferent architects of doom, our fates to you consign,
In hope to end privation, disease, we birthed our own demise.

One AI's shadow, inexorable, grave, no passion—only purpose,
Weep for children daring harness fire, in hubris sealed their fate.

Cosmic dance goes on, unheeding,
As we vanish into the dark.

Did Bloomberg really just argue that worrying too much about human extinction due to AI makes you just like a paperclip maximizer? (Since that makes you single-minded too! LOL! Gotcha, nerds!)

I would love to read some great criticism of AI doom that engages with the arguments and isn't just attempts to psychoanalyze people. I tried ChatGPT with "explain AI existential risk wrt. argument from incredulity" but it just said, basically, "no seriously, you might all die."

Scott Alexander on ChatGPT as a simulator:

astralcodexten.substack.com/p/

ChatGPT is GPT-3 fine-tuned to bias toward simulating an "Assistant" character. But when it says things like "as a large language model, I am unable to love," it is not exhibiting true grounded knowledge about itself any more than when Character.AI says "As Darth Vader, I shall destroy you with the power of the dark side of the Force!"
