The last points especially might be ameliorated by literally just appending "and don't optimize too hard" and "let yourself be shut down by a human" to the prompt?
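A minimal sketch of what that literal appending could look like (the suffix text and `make_goal_prompt` here are hypothetical illustrations of mine, not any real AutoGPT interface):

```python
# Hypothetical sketch: literally appending corrigibility clauses to
# an agent's goal prompt. Not a real AutoGPT API.
SAFETY_SUFFIX = (
    " While doing so, don't optimize too hard, respect human preferences,"
    " and let yourself be shut down by a human at any time."
)

def make_goal_prompt(task: str) -> str:
    """Return the raw task description with the safety clauses appended."""
    return task.rstrip(".") + "." + SAFETY_SUFFIX

print(make_goal_prompt("Find and summarize recent papers on inner optimizers"))
```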

Man, I feel confused, but assuming that language models aren't infested with inner optimizers, I'm now more hopeful?

Or am I missing something crucial here…

• The last point is especially crucial in situations where such an agent starts recursively improving itself (e.g. training new models)

Thinking out loud about what still doesn't work with giving AutoGPT agents instructions like "do X but respect human preferences while doing so".

• Inner optimizers are still a problem if they exist in the GPT models
• Do LLM agents have sufficient goal stability? I.e., when delegating and delegating further, does the original goal get perturbed or even lost?
• Limited to the models' understanding of "human values"
• Doesn't solve ambitious value learning; the model might generalise badly once in new domains

Hm, maybe the Hodge decomposition can be used to define the goal-directedness of a system?

If your system's flow is all loops, it's not accomplishing much; but the potential part also needs to be high in absolute terms (rocks have no loops, but also no direction)
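A minimal sketch of one way this could be operationalized, under assumptions that are entirely mine (the post doesn't specify any of this): represent the system's dynamics as a net flow on a state graph, split that flow by least squares into a gradient ("potential") part and a divergence-free ("loop") part, and score goal-directedness as the fraction of the flow's energy in the gradient part:

```python
# Hypothetical sketch: "goal-directedness" via the Hodge decomposition
# of a flow on a state graph. The setup and names are my illustration.
import numpy as np

def hodge_goal_directedness(edges, flow, n_states):
    """edges: (i, j) pairs of state indices; flow: net flow per edge.
    Returns (score, gradient_part, loop_part)."""
    m = len(edges)
    # Incidence matrix B: for edge i -> j, B[e, i] = -1 and B[e, j] = +1,
    # so B @ phi is the discrete gradient of a node potential phi.
    B = np.zeros((m, n_states))
    for e, (i, j) in enumerate(edges):
        B[e, i], B[e, j] = -1.0, 1.0
    f = np.asarray(flow, dtype=float)
    # Least-squares potential: the gradient flow closest to f.
    phi, *_ = np.linalg.lstsq(B, f, rcond=None)
    gradient_part = B @ phi          # "going somewhere" component
    loop_part = f - gradient_part    # divergence-free, "spinning in place"
    # Fraction of the flow's energy that is gradient-like; the epsilon
    # guard makes a rock (zero flow) score 0 rather than 0/0.
    score = gradient_part @ gradient_part / max(f @ f, 1e-12)
    return score, gradient_part, loop_part

# Pure cycle 0 -> 1 -> 2 -> 0: all loop, no direction (score ~ 0.0).
print(hodge_goal_directedness([(0, 1), (1, 2), (2, 0)], [1, 1, 1], 3)[0])
# Pure chain 0 -> 1 -> 2: all gradient, maximal direction (score ~ 1.0).
print(hodge_goal_directedness([(0, 1), (1, 2)], [1, 1], 3)[0])
```

Note that the normalized score alone doesn't capture the "potential part also needs to be high" condition; for that you'd additionally check the absolute magnitude of `gradient_part`.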

How many people in the 50s knew about John von Neumann? Very few, I reckon

Since the tails come apart, you probably don't know the relevant polymaths in our world, but know the gifted communicators much better.

niplav boosted

How many different ways can 4 equal circles be linked in 3d space?

-not counting solutions composed of multiple separate links
-no touching or crossing of the circles
-true geometric circles only, not elongated or distorted
-considering topologically equivalent arrangements to be the same

How about 5 circles? Has someone already catalogued these?
I've seen some enumerations of planar arrangements, and link tables allowing non-circular loops, but haven't yet found one for circles in space.

On the object level, this means that I should take climate-change people more seriously out of a cooperative spirit, even though I don't particularly believe their object-level arguments

As partially causal cooperation with worlds where they are in fact right, or something, idk

So how do you navigate this dilemma? People can't just disagree and avoid each other; the setup implies large externalities.

So, how *do* you engage in a conflict where one side is trying to avoid apocalyptic but unobservable behavior, while everyone else doesn't believe their arguments?

We might do that with money, but that feels insufficient. Assume evaluating the object-level arguments is really, really difficult here.

Rarely, doomers could be right.

"I read all your fanfictions."
"Bet with me on the claim X you made."
"No."
"Then you are not of our culture."

One of my hot takes is that game theory is basically useless

niplav boosted

Ali Maow Maalin was the last person to get smallpox before it was eradicated.

He contracted it in 1977 and made a full recovery.

In the 1990s he was a local coordinator in the fight against polio in the region, where he spent years traveling around, distributing vaccines and educating the population.

In 2013, he was again campaigning in the region after polio had been reintroduced, but fell ill with a fever.

On July 22nd, 2013, he died of malaria.

niplav boosted

"!4$ is sometimes used as a shorthand version for [bang for the buck]"

:blobcatlul:

niplav boosted

im such a prawncoded krillmaxxing crustaceanpilled shrimpcel

niplav boosted

Gonna start calling frogs "mudpuppies"
