@niplav OH MY GOD they added phoenix

Recipe: `🐦` + `Zero Width Joiner` + `🔥` = 🐦‍🔥 (Not supported on Mastodon it seems, but works in gsheets!)

> "Approved in September 2023 as part of Emoji 15.1. Available via the latest Samsung devices, Google's Noto Emoji fonts, and iOS 17.4. Coming to more platforms throughout 2024."

I swear I've been looking for this when it didn't exist.
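
A minimal sketch of the recipe in code (plain Python; just concatenating the code points named above):

```python
# Build the phoenix emoji (Emoji 15.1) from its ZWJ sequence.
bird = "\U0001F426"  # 🐦 BIRD
zwj = "\u200D"       # ZERO WIDTH JOINER
fire = "\U0001F525"  # 🔥 FIRE

phoenix = bird + zwj + fire
print(phoenix)  # renders as 🐦‍🔥 where Emoji 15.1 is supported, else as 🐦🔥
```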

@niplav Btw, claude-3.5-sonnet seems surprisingly eager to quantify uncertainty. It frequently does so even when this is the phrasing with which I request it:

"""
And when no specific answer is available, I'd really appreciate if you quantify/bound your uncertainty. : )
"""

Whole system prompt below, in case usefwl to you. I'm embarrassed about not yet having invested the warranted amount of time&effort in optimizing&personalizing the upfront context I provide it, but… TODO.

@niplav Anyway, um… I have a question for you. 👉👈

I want to ask an AI (via system/user message) to use subscript probabilities (or another non-clumsy in-line way to do it), but I'm not sure what the semiotically-optimal option is.

- Confidence interval? Idk how to write those, or how to avoid subtle noob-mistakes.

- Or maybe point-estimates are fine? In percentages or odds? Log-odds?? Or maybe "share likelihood ratios, not posterior beliefs"? Hartleys then??
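
For concreteness, a quick sketch of how the same credence renders in each of those notations (the conversions are the standard ones; the function is just illustrative):

```python
import math

def credence_formats(p: float) -> dict:
    """Render a point-estimate probability in each candidate notation."""
    odds = p / (1 - p)
    return {
        "percent": f"{100 * p:.0f}%",
        "odds": f"{odds:.2g} : 1",
        "log-odds (bits)": f"{math.log2(odds):+.2f}",
        "log-odds (hartleys)": f"{math.log10(odds):+.2f}",
    }

print(credence_formats(0.8))
# {'percent': '80%', 'odds': '4 : 1',
#  'log-odds (bits)': '+2.00', 'log-odds (hartleys)': '+0.60'}
```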

@niplav I've had this feeling at least two times, but not for an entire half-hour! (I can't recall credences, but the plausibility was just emotionally salient/urgent to me.)

@niplav Also, I feel like this thread started out with a tone that makes it appear like I was contradicting you. But I meant to confirm the thing you meant, while providing nuance re the sources of that non-locality (based on my current models).

@niplav ¹DCL ~is um.. smth like the expected amount by which two randomly chosen neurons have computationally-connected activity-levels at any given time, weighted by the distance between them.

…or smth. I'm trying to translate the concept from memory from where I saw² it defined for Ising models, and I think I failed.

² youtu.be/vwLb3XlPCB4?si=MZzvj-
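
To make my translation-attempt concrete anyway, here's a toy numeric version of that reading of DCL (entirely my construction, not the Ising-model definition from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 50, 1000
positions = rng.uniform(0, 1, size=(n, 2))  # random unit locations
activity = rng.standard_normal((n, t))      # stand-in activity traces

corr = np.corrcoef(activity)  # pairwise activity correlations, n x n
dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)

iu = np.triu_indices(n, k=1)  # each pair counted once
dcl = np.average(np.abs(corr[iu]), weights=dist[iu])
print(f"distance-weighted mean pair correlation: {dcl:.3f}")
```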

@niplav Globally-sensitive is another way to say critical brain hypothesis: the idea that the brain is constantly tuned for maximum Dynamic Correlation Length¹, which is achieved by maybe-something-like regionally renormalizing activity-levels so it borders "criticality" (ie, closeness-to-phase-transition) all the time.

Neuronal activity on several measures is lognormal, so a significant fraction of spikes have much larger effect on the rest of the network compared to others.
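
A quick numeric illustration of what that heavy tail means (parameters assumed, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
effects = np.sort(rng.lognormal(mean=0.0, sigma=1.5, size=100_000))
top1_share = effects[-1_000:].sum() / effects.sum()
print(f"top 1% of spikes carry ~{top1_share:.0%} of total effect")  # roughly a fifth
```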

@niplav Or so says the module that is writing those words into a comment. By the time information from perceptual fields (by which I mean first-responders to external stimuli) becomes salient enough for you to *think about it*, it has already been synthesized with / filtered against everything else.

But yes, the field-as-perceived-by-us (where "us" refers to the obvious things I mean of course, whatever those are) is really globally-sensitive.

Could be interesting to combine it with a prompt like

> "Hello AI! Firstly, what does it look like I'm doing? Secondly, do you have any particular information you think I ought to be aware of wrt what it looks like I'm trying to achieve? An easier way I could go about it? Stuff that I don't know that I don't know, so can't even check for?"

Not sure what I wish to do with this information, but I do note that having an AI ~constantly scanning my monitor to try to infer what I'm up to is well within price-range.

Processing one full screenshot per 5m via claude-3.5-sonnet costs ~1.35 USD per day, excluding output-tokens. ~Same price for gpt4o & 〃-mini.

docs.anthropic.com/en/docs/bui
openai.com/api/pricing/
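
Back-of-envelope check of that figure, using Anthropic's published image-token estimate (tokens ≈ width × height / 750, with images downscaled to ≤1568 px on the long edge) and 3 USD per million input tokens; the exact number shifts with screen resolution:

```python
shots_per_day = 24 * 60 / 5          # one screenshot per 5m
tokens_per_shot = 1568 * 882 / 750   # a 1920x1080 screen after downscaling
usd_per_input_token = 3 / 1_000_000  # claude-3.5-sonnet input pricing
daily = shots_per_day * tokens_per_shot * usd_per_input_token
print(f"~{daily:.2f} USD/day")       # ~1.59 here; smaller captures land nearer ~1.35
```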

"Hi everyone, I'm Tsvi-Benson Tilsen. Hope I'm saying that right."

the top shelf in any of my libraries (eg 'to-read', 'to-watch', 'to-listen') is always labelled "review". …I wish it were as shiny as the rest.

new things are ~always shiny in excess of their value. the Nth time we review a memory is worth exponentially more than the (N-1)th time, bc ➀ the Forgetting Curve, and ➁ total epistemic profit (the "rent paid" by the memory) depends on its longevity
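
A toy model of ➀+➁ (all numbers assumed): if each successful review multiplies the retention interval, as in standard spaced-repetition scheduling, the days-of-retention bought by the Nth review grow geometrically:

```python
interval, multiplier = 1.0, 2.5  # assumed: 1-day start, SM-2-ish multiplier
for n in range(1, 6):
    print(f"review {n}: buys ~{interval:.1f} more days of retention")
    interval *= multiplier
# review 5 buys ~39 days -- each pass pays ~2.5x the rent of the last
```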

we overestimate how much we remember, bc
➀ metamemories produce strong feelings of recognition (eg, I recall *that* I've read abt X ~indep of *what*), and
➁ survivorship bias wrt what we can even check (ie, we can't spot-check the uniform dist of what-we-used-to-remember)

@niplav TO BE CLEAR I'm liking this because it's a nice display of private-thoughts-in-public, not intending to communicate anything beyond that. I mean it!

@niplav It all adds up to normality. It may or may not add up to wayyy more than that too, but at least it can't add up to anything *less* than normality.

Just hope it doesn't mess with the ontology upon which my ethics depends. Reality has done that too many times already. ❤️‍🩹

In a sense, it's my ethics which holds things together. I say "ouch", and I know all else has to adjust to accommodate the fact that I care about whatever-that-was.

So if we can compress the world as much as we already have, using tools (brains & GNW) which we have strong reasons to expect are very limited, that suggests to me that there's a wealth of untapped simplicity beyond the horizon. But it can only be accessed by top-down non-myopic deliberate practice.

One reason to expect brains to merely be scratching the surface is the deep inevitability of pleiotropy and build-up of technical debt (increased refactoring-cost) for all hill-climbing algorithms. "Consciousness" (i.e. global neuronal workspace) is the only adaptation on the planet that even marginally addresses it. But it too is powerless against the tide.
