
1. OpenAI's "release early" plan is not insane
2. LLMs will become agents because it's profitable and humans will deliberately make them into agents
3. "The public" seems pretty receptive to the idea of AI risk
4. Doom is more likely to be humans plain instructing the AI to recursively self-improve, or to plain destroy the world
5. Once some sentient AIs are released, humans are likely to try to torture them, and we need to set up guardrails asap.

lesswrong.com/posts/3DyXQkkkGn

> “At the time, science had declared humans unique, since we were so much better at identifying faces than any other primate. No one seemed bothered by the fact that other primates had been tested mostly on human faces rather than those of their own kind.”
-- Are We Smart Enough to Know How Smart Animals Are?

This, but in the context of Shoggoth, and how people think human benchmarks (e.g. "can it do human math?") are relevant for measuring GPT-4's true intelligence.

^ At present, moderately healthy subcultures are able to survive because people vote with their feet (or find virtual bubbles) and more or less isolate from the rest of the world. Their survival depends on being visible enough that kindred spirits from afar can find them, but not so visible that belligerent cultures start to target them specifically with absentee punishments.


^ When cultures give Moloch free rein to shape them in His service, you quickly see them adopt the following rituals.

- Meta-level judgments: people are punished or rewarded according to what judgments they reveal.

- Absentee punishments: people are punished for insufficiently contributing to the meta-level judgments.

It's worth noting that correlated memeplexes with the above features will tend to win out over cultures without them.


^ The negative aspect is that these reversals are often tangential to what anyone at all would prefer for themselves. As long as the core mechanics underlying pluralistic ignorance stay in effect, the culture will arbitrarily Abilene-paradox itself to places nobody wanted to go.


^ One positive aspect of living in a Keynesian culture is that norms can change in the blink of an eye, since it basically shares its underlying mechanics with what causes bank runs. If smart money says that other people will start predicting that other people will predict being gay is just fine, then people will start selling their investments in anti-gayness asap. (With the exception of subcultures where social status derives from reversing other subcultures.)
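(If you want the mechanics concrete: here's a toy Granovetter-style threshold cascade, with every number invented purely for illustration. Each agent publicly endorses the new norm only once they expect enough others to endorse it, so a small shift in expectations either fizzles out or tips the whole population — that's the bank-run part.)

```python
# Toy threshold-cascade model of the "bank run on a norm" dynamic.
# All parameters below are made up for illustration only.
import random

random.seed(0)
N = 1_000

# Each agent publicly endorses the new norm only once they expect at least
# `threshold` of everyone else to endorse it too. Private preferences never
# enter the update; only expectations about others' behaviour do.
thresholds = [random.gauss(0.25, 0.10) for _ in range(N)]

def settle(expected_share, steps=200):
    """Iterate best responses until the publicly endorsing share stabilises."""
    share = expected_share
    for _ in range(steps):
        new_share = sum(t <= share for t in thresholds) / N
        if abs(new_share - share) < 1e-9:
            break
        share = new_share
    return share

# A small difference in what "smart money" expects decides whether the norm
# stays fringe or flips almost overnight.
for nudge in (0.05, 0.10, 0.20, 0.30):
    print(f"expected share {nudge:.2f} -> settles at {settle(nudge):.2f}")
```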


If everybody looks to their peers to decide what to think about a person, the game implicitly turns into a Keynesian beauty contest, where popularity is determined by what others predict others predict others predict (and so on) other people like.

rime boosted

I just donated $1500 AUD to help combat malaria.

Because I donated via my local tax-deductible EA organisation, it doesn't appear on the fundraising page. So in lieu of being able to make the number go up, I'd like some praise please 😇.
---
RT @givingwhatwecan
Devastatingly, malaria kills one child every minute. We have just launched our campaign to combat this serious disease.

You can help save lives by donating to our
twitter.com/givingwhatwecan/st

Whatever this is, it's obvious that this is where it had to be. I often dump my thoughts on an unsuspecting social media site because it helps me remember them. As such, I rarely bother pointing out caveats or nuance that I know I'll obviously remember. This may or may not offend people terribly, but consider this my excuse.
