niplav boosted

bro you're not scaring the hoes at all. the hoes are actually developing an unassailable confidence and ruthless clarity of purpose that i'm finding quite alarming

@wolf480pl Neural Networks are able to predict the "future"[1] better than chance: arxiv.org/abs/2206.15474

This scales with model size, and is still far below the human baseline

[1]: actually doing pastcasting

if you're German then the term "gaslighting" is slightly queasy

can you money-pump the agents simulated by large language models?

give standard vNM-coherence-violating scenarios to smarter language models

at higher levels of intelligence, maintaining coherence becomes more difficult, since your action space widens and achieving high coherence might be NP-hard. so a better metric is coherence divided by the size of the action space
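a minimal sketch of what probing for a money-pump could look like: check whether an agent's pairwise choices contain a preference cycle (a transitivity violation), since a cyclic agent can be pumped around the loop for a fee each step. `prefers` here is a hypothetical stand-in for prompting a language model with a binary choice; everything below is illustrative, not an actual experimental setup.

```python
# Hypothetical sketch: detect a preference cycle (a vNM transitivity
# violation) in pairwise choices, which would make an agent money-pumpable.
from itertools import permutations

def has_preference_cycle(options, prefers):
    """Return True if some ordered triple (a, b, c) forms a cycle:
    a preferred to b, b preferred to c, c preferred to a."""
    return any(
        prefers(a, b) and prefers(b, c) and prefers(c, a)
        for a, b, c in permutations(options, 3)
    )

# Toy intransitive agent: prefers items in a rock-paper-scissors loop.
cyclic = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
print(has_preference_cycle(["rock", "paper", "scissors"],
                           lambda a, b: (a, b) in cyclic))  # True
```

for the real thing you'd query the model many times per pair and worry about prompt-order effects, but the cycle check itself stays this simple.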

@WomanCorn sounds like a good idea! doubt it'll help in any way in the limit (also epistemic contamination is a bitch), but i'll cheer the people who try

anti-utilitarian screed 

@genmaicha naturally i disagree with this a bunch :-D

do you want a more in-depth answer or nah

niplav boosted

i keep thinking about "slow is smooth, smooth is fast"

@agdakx Doing God's work 🙏

Go nerd-snipe them, regal person 👑

niplav boosted

onrushing tide, a mere
fifty miles away, and just
our puny channels

should i start a podcast

Disappointed that [1] doesn't actually check for common vNM violations. This must be assuaged

[1]: sohl-dickstein.github.io/2023/
