I operate by Crocker's rules[1].
[1]: https://www.lesswrong.com/tag/crockers-rules
on reflection, disinviting AI researchers from parties and conferences is not an effective way of stopping them from developing AGI
first you learn quicksort and mergesort, then you go back and learn insertion sort and selection sort. then, you learn heap sort.
and you never learn bubble sort.
this is the "machete order" for sorting algorithms.
note to self: do not use a ping twice with the same person
Keith is an incurable romantic
“If you really cared about this AI thing, you’d replicate Ted Kaczynski’s successful strategy.”
RT @TetraspaceWest @DvnnyyPhantom
Grand Futures (non-lobotomized)
what if we held hands 👉👈 🥺 at the Sloan Great Wall and listened to the pulsars sing
I notice I can just *not engage* with the Discourse and do my job
blahaj is mid
this person: https://en.wikipedia.org/wiki/Lillian_Alling
"the transatlantic ferry won't give me a perk? okay i'll just walk to poland then. bering strait here i come"
you are trying to solve the right problem :) with the right methods :) based on a correct model of the world :) derived from accurate thinking :)
Hot take: Making poorly calibrated public predictions is pretty awesome.
Not only can you learn from them, but it's fantastic that you were willing to predict something concrete clearly enough to actually be wrong - which is unfortunately rare.
genetic determinism is largely true
nonant
Large Language Model Denies Wifebeating
Ouroboros can have a little Ouroboros as a treat
like this post if you'd do it for free
how much money would i need to give you for you to take a pill that makes you insane?
a Schelling point for those who seek one