ok but hear me out

one hundred billion dollars into mechanistic interpretability

Once 50% of all knowledge-work has been automated, how long in expectation until 90%+ of all work has been automated?

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

The Amish should be able to continue doing whatever they're doing

niplav boosted

the map is not the territory. for one, it takes a lot fewer soldiers to occupy the map

Also if ant lives are currently net negative, them going extinct would be a good thing.

my intuition: I think ants would be either extinct or only alive in zoos &c. The economy then would be ~43 orders of magnitude bigger, and I think a lot of that would've still happened on earth. Humans don't care enough about ants to keep them alive that much (especially since we'd expect human descendants that far into the future to care mainly about reproductive fitness directly).

if economic growth continued at 1% a year, for 10k years, would that be good or bad from the position of ants in 10k years?
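For scale, the "~43 orders of magnitude" figure follows from plain compounding; a quick back-of-the-envelope check:

```python
import math

# 1% annual growth compounded over 10,000 years,
# expressed as orders of magnitude of total growth:
# log10((1 + r)^n) = n * log10(1 + r)
years = 10_000
rate = 0.01
orders_of_magnitude = years * math.log10(1 + rate)
print(round(orders_of_magnitude, 1))  # ≈ 43.2
```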

law of one player: nothing happens unless you (yes, you specifically) make it happen

common theme rn in alignment research is how values evolve, e.g. concept/value extrapolation, shard theory

niplav boosted

bro you're not scaring the hoes at all. the hoes are actually developing an unassailable confidence and ruthless clarity of purpose that i'm finding quite alarming

if you're german then the term "gaslighting" feels slightly queasy

can you money-pump the agents simulated by large language models?

give standard vNM coherence violating scenarios to smarter language models

at higher levels of intelligence, maintaining coherence becomes more difficult since your action space widens and having high coherence might be NP-hard. so the better metric is coherence divided by size of action space
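One concrete way to probe the money-pump question above: elicit pairwise choices from the model and check whether the induced strict-preference relation contains a cycle (a > b > ... > a), since a cyclic agent can be led around a trade loop, paying at each step. A minimal sketch, with the `prefers` encoding being my own assumption for illustration:

```python
def money_pumpable(prefers):
    """prefers: dict mapping each option to the set of options it is
    strictly preferred over. Returns True iff the preference digraph
    contains a cycle, i.e. the agent violates transitivity and can be
    money-pumped."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on DFS stack / done
    color = {x: WHITE for x in prefers}

    def dfs(x):
        color[x] = GRAY
        for y in prefers.get(x, ()):
            if color.get(y, WHITE) == GRAY:
                return True  # back edge: preference cycle found
            if color.get(y, WHITE) == WHITE and dfs(y):
                return True
        color[x] = BLACK
        return False

    return any(color[x] == WHITE and dfs(x) for x in list(prefers))

# a > b, b > c, c > a: intransitive, hence pumpable
print(money_pumpable({"a": {"b"}, "b": {"c"}, "c": {"a"}}))       # True
# a > b > c: transitive chain, not pumpable
print(money_pumpable({"a": {"b", "c"}, "b": {"c"}, "c": set()}))  # False
```

The hard part in practice is eliciting stable pairwise choices from the model at all; the cycle check itself is the easy step.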
