
Ngl, yesterday at the airport I had >1% credence in "AI takeover is happening" for half an hour

@niplav I've had this feeling at least two times, but not for an entire half-hour! (I can't recall credences, but the plausibility was just emotionally salient/urgent to me.)

@niplav Anyway, um… I have a question for you. 👉👈

I want to ask an AI (via system/user message) to use subscript probabilities (or some other non-clumsy in-line way to quantify uncertainty), but I'm not sure what the semiotically-optimal option is.

- Confidence interval? Idk how to write those, or how to avoid subtle noob-mistakes.

- Or maybe point-estimates are fine? In percentages or odds? Log-odds?? Or maybe "share likelihood ratios, not posterior beliefs"? Hartleys then?? (Rough conversions sketched below.)
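
A rough sketch of how those candidate formats interconvert (plain Python; the 90% point estimate is a made-up example):

```python
import math

p = 0.9                      # point estimate, percentage-space: 90%
odds = p / (1 - p)           # 9.0, i.e. odds of 9:1
shannons = math.log2(odds)   # log-odds in base 2: ~3.17 Sh
hartleys = math.log10(odds)  # log-odds in base 10: ~0.95 Hart

# and back again:
assert abs(odds / (1 + odds) - p) < 1e-12
```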

@niplav Btw, claude-3.5-sonnet seems surprisingly eager to quantify uncertainty. It frequently does so even when I request it with nothing stronger than this phrasing:

"""
And when no specific answer is available, I'd really appreciate if you quantify/bound your uncertainty. : )
"""

Whole system prompt below, in case it's useful to you. I'm embarrassed about not yet having invested the warranted amount of time & effort in optimizing & personalizing the upfront context I provide it, but… TODO.

@rime I think point estimates are totally fine, and hard to mess up.
I still go with percentage-space most of the time because my beliefs aren't *that* strong in most cases.

And likelihood ratios would be used if you're updating your credence in a proposition based on some evidence (where you need *both*): seeing E updates H by 2 shannons (base-2 supremacy, sorry :-D)

I now wonder whether notation is useful for the update case…
(started at niplav.site/subscripts.html#Share_Likelihood_Ratios_not_Beliefs)
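
To make that update concrete (a sketch; the 50% prior is made up): "updates H by 2 shannons" means multiplying the odds on H by 2² = 4.

```python
prior_p = 0.5                          # made-up prior: P(H) = 50%
prior_odds = prior_p / (1 - prior_p)   # 1:1

likelihood_ratio = 2 ** 2              # "+2 shannons" = odds ratio of 4
posterior_odds = prior_odds * likelihood_ratio       # 4:1
posterior_p = posterior_odds / (1 + posterior_odds)  # 0.8
print(f"P(H|E) = {posterior_p:.0%}")   # 80%
```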

@rime CIs work if your belief is over some real-valued quantity out there in the world. If it's a probability on a binary event (will X/won't X), I *think* a CI is not necessary: at least, nobody has been able to come up with a good argument why a probability distribution over your credence in a binary proposition doesn't just "integrate out" into its expectation. (Infrabayesian shenanigans aside.)
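
A quick way to see the "integrates out" claim (a sketch using numpy; the Beta(18, 2) belief is made up): for a single binary event, a whole distribution over your credence p predicts exactly the same frequency as its mean.

```python
import numpy as np

rng = np.random.default_rng(0)

ps = rng.beta(18, 2, size=100_000)   # uncertainty over my own credence p
outcomes = rng.random(100_000) < ps  # one binary draw per sampled p

# The event frequency matches E[p]; the spread of ps never shows up.
print(outcomes.mean(), ps.mean())    # both ~0.9
```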

When I still want a CI on a binary proposition, I guess noting the parameters of a Beta-distribution would work?

@rime I used that in niplav.site/china.html, but it takes some intuition to build up.
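
A minimal sketch of what noting Beta parameters could look like (using scipy.stats; Beta(18, 2) is a made-up example):

```python
from scipy.stats import beta

# Beta(a, b) reads roughly as "a positive and b negative pseudo-observations".
a, b = 18, 2

point_estimate = a / (a + b)            # mean: 0.9
lo, hi = beta.ppf([0.05, 0.95], a, b)   # central 90% credal interval

print(f"P(X) ≈ {point_estimate:.0%}, 90% CI ≈ [{lo:.0%}, {hi:.0%}]")
```

The nice property: a + b acts as a resilience knob, since Beta(90, 10) has the same mean as Beta(9, 1) but a much tighter interval.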

So, TL;DR: I think probabilities expressed as percentages are a-okay 🙂

@niplav The thing about credal intervals is that they communicate something about VoI (value of information) too. If I "think [1-99%]" there are cookies in heaven, I'm saying something like "I *could* end up with credence at 1% or at 99%". ("credal resilience" / "credal sensitivity")

but maybe it could be specified with mode-credence & meta-credence, like…

"I think_{90% {⧉60%}}"

"P(H)=90%, but P('my P(H) will change by ±0.5 in a year') = 60%."

@niplav for credence over non-binary outcomes, and when needed, one could standardize a 4-tuple like (
P("x ≤ 25%"),
P("25% < x ≤ 50%"),
P("50% < x ≤ 75%"),
P("x > 75%"))

but, uh… I do not expect_99% to find a significantly good and practical use-case for this.
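
As data, that 4-tuple could be as simple as this (a sketch; the masses are made up):

```python
# Coarse credence over a quantity x, as four bucket masses summing to 1.
credence_over_x = (
    0.10,   # P(x ≤ 25%)
    0.20,   # P(25% < x ≤ 50%)
    0.45,   # P(50% < x ≤ 75%)
    0.25,   # P(x > 75%)
)
assert abs(sum(credence_over_x) - 1.0) < 1e-9
```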

@niplav > "When I want to still have a CI on a binary proposition I guess noting parameters for a Beta-distribution would work?"

I didn't notice this... Params for a distribution sort of work, but it's computationally complex to build up to if I just have a visualization of a probability-distribution in my head? At least, I don't know what a Beta-dist is, or how to build one off the top of my head / shooting from the hip.

@niplav base 2 is just superior. 🤝

also, re "share likelihood ratios, not beliefs", I like my comment as a quick demonstration of the dangers of doing the opposite.

the essence is just that, ideally, in order to avoid accidental double-counting when updating on testimony, you want to ➀ say exactly which ~personally-independent observations you have, and ➁ quantify the evidential weight (for some H) of those observations under your own interpretation. Computationally costly, though…
