@niplav OH MY GOD they added phoenix

Recipe: `🐦` + `Zero Width Joiner` + `🔥` = 🐦‍🔥 (Not supported on Mastodon it seems, but works in gsheets!)

> "Approved in September 2023 as part of Emoji 15.1. Available via the latest Samsung devices, Google's Noto Emoji fonts, and iOS 17.4. Coming to more platforms throughout 2024."

I swear I've been looking for this when it didn't exist.
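The recipe in Python, as a quick sketch (the code points are the standard ones for the sequence; whether it renders as one glyph depends on your font supporting Emoji 15.1):

```python
# Phoenix is a ZWJ sequence: bird + Zero Width Joiner + fire.
bird = "\U0001F426"   # 🐦
zwj = "\u200D"        # Zero Width Joiner (invisible)
fire = "\U0001F525"   # 🔥

phoenix = bird + zwj + fire
print(phoenix)        # 🐦‍🔥 on supporting fonts; bird + fire side by side otherwise
print(len(phoenix))   # 3: three code points, one glyph
```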

Could be interesting to combine it with a prompt like

> "Hello AI! Firstly, what does it look like I'm doing? Secondly, do you have any particular information you think I ought to be aware of wrt what it looks like I'm trying to achieve? An easier way I could go about it? Stuff that I don't know that I don't know, so can't even check for?"

Not sure what I wish to do with this information, but I do note that having an AI ~constantly scanning my monitor to try to infer what I'm up to is well within price-range.

Processing one full screenshot per 5m via claude-3.5-sonnet costs ~1.35 USD per day, excluding output-tokens. ~Same price for gpt4o & 〃-mini.

docs.anthropic.com/en/docs/bui
openai.com/api/pricing/

"Hi everyone, I'm Tsvi Benson-Tilsen. Hope I'm saying that right."

the top shelf in any of my libraries (eg 'to-read', 'to-watch', 'to-listen') is always labelled "review". …I wish it were as shiny as the rest.

new things are ~always shiny in excess of their value. the Nth time we review a memory is worth exponentially more than the (N-1)th time, bc ➀ the Forgetting Curve, and ➁ total epistemic profit (the "rent paid" by the memory) depends on its longevity
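A toy model of ➀, assuming an exponential forgetting curve and a made-up stability multiplier per review (numbers illustrative, not fitted to any data):

```python
import math

def retention(days_elapsed, stability):
    """Exponential forgetting curve: P(recall) = exp(-t / s)."""
    return math.exp(-days_elapsed / stability)

# Toy assumption: each successful review multiplies memory stability by a
# fixed factor, so the interval you can wait before retention drops to a
# given level grows geometrically with the review count N.
stability = 1.0   # days
growth = 2.5      # stability multiplier per review (illustrative)
intervals = []
for _ in range(5):
    intervals.append(stability)
    stability *= growth

print(intervals)                    # [1.0, 2.5, 6.25, 15.625, 39.0625]
print(retention(7, intervals[-1]))  # recall prob one week after the 5th review
```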

we overestimate how much we remember, bc
➀ metamemories produce strong feelings of recognition (eg, I recall *that* I've read abt X ~indep of *what*), and
➁ survivorship bias wrt what we can even check (ie, we can't spot-check the uniform dist of what-we-used-to-remember)

So if we can compress the world as much as we already have, using tools (brains & GNW) which we have strong reasons to expect are very limited, that suggests to me that there's a wealth of untapped simplicity beyond the horizon. But it can only be accessed by top-down non-myopic deliberate practice.

One reason to expect brains to merely be scratching the surface is the deep inevitability of pleiotropy and build-up of technical debt (increased refactoring-cost) for all hill-climbing algorithms. "Consciousness" (i.e. global neuronal workspace) is the only adaptation on the planet that even marginally addresses it. But it too is powerless against the tide.

Are the smartest systems today (brains/AI) anywhere near the limit of how much a world-model can be compressed, or nowhere close to it? Given all the data we observe, how many intrinsic dimensions is it distributed over?
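A toy sketch of the intrinsic-dimension question for the linear case: data that lives in 50 ambient dimensions but is generated from 3 latent factors, with PCA recovering the 3. (Real-world manifolds are nonlinear, so this badly understates the problem; the numbers and seed here are arbitrary choices for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Data that *looks* 50-dimensional but is generated from 3 latent factors,
# plus a little observation noise.
latent = rng.normal(size=(1000, 3))
mixing = rng.normal(size=(3, 50))
data = latent @ mixing + 0.01 * rng.normal(size=(1000, 50))

# PCA via SVD of the centered data; count components covering 95% variance.
centered = data - data.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
var_fraction = s**2 / (s**2).sum()
k = int(np.searchsorted(np.cumsum(var_fraction), 0.95) + 1)
print(k)  # expect the 3 latent dimensions to be recovered
```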

@cosmiccitizen otoh, if—more realistically—my rationalization-strength wrt unendorsed motivations was merely extremely high, i wud focus on trying to understand and gain leverage over them, instead of blindly "compensating" wo understanding.

a balancing strategy like "ok, spend 30 sec thinking abt pros, and 30 sec thinking abt cons" is blind in the sense that it has no model of the enemy, which also means that it fails to generate opportunities to *learn* abt the enemy.

u say "a 1000-bit exact specification of X," and I ask "relative to which interpreter?"

I've been misled by naive information theory for a long time. there is no representation-of-thing which "has the structure" of the thing-in-itself. information is only ever the delta btn contexts. communication is always a bridge over inferential distance.

there are vars that more or less dereference as intended, but it's Gricean all the way down.

(thoughts prob counterfactually inspired by niplav)
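A minimal concrete version of the interpreter-relativity point: the same object gets four different "description lengths" depending on which decompressor you measure against, and none of them is *the* information content of the string.

```python
import bz2
import lzma
import zlib

# One object, four "bit-counts": each length is a delta relative to a
# particular decoder (interpreter), not an intrinsic property of the object.
obj = ("the quick brown fox jumps over the lazy dog. " * 40).encode()

lengths = {
    "raw": len(obj),
    "zlib": len(zlib.compress(obj)),
    "bz2": len(bz2.compress(obj)),
    "lzma": len(lzma.compress(obj)),
}
print(lengths)
```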

the Nature function herself takes arbitrary coordinates in spacetime, and returns exact distributions of matter within those bounds.

"Nature speaks differential equations" pfff!! differential equations are like gobbledygook to her, and she takes offense at the notion that this is *what she's made of*.

> "Since Newton, mankind has come to realize that the laws of physics are always expressed in the language of differential equations."

this is wrong. differential equations are what we must *settle for* when finding the generators of the data is intractable. if u knew Nature herself, u wudn't be restricted to computing her step-by-step w infinitesimal step-sizes, u cud j interpolate btn arbitrary points w no loss in accuracy.
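A sketch of the contrast, using dy/dt = -ky as the toy system: if all you have is the differential equation, you march forward in small steps (Euler here) and the error shrinks only as the step-size does; if you have the generator itself (y0·exp(-kt)), you evaluate any t directly with no accumulated error.

```python
import math

k, y0, t_end = 1.0, 1.0, 5.0

def euler(steps):
    """Step-by-step integration of dy/dt = -k*y with a finite step-size."""
    dt = t_end / steps
    y = y0
    for _ in range(steps):
        y += dt * (-k * y)
    return y

# "Nature herself": the closed-form generator, exact at any t.
exact = y0 * math.exp(-k * t_end)

for steps in (10, 100, 1000):
    print(steps, abs(euler(steps) - exact))  # error shrinks with step count
```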

- the highest-avg-IQ academic subjects are mathematics and philosophy *because* they're also *less* financially profitable (thus, ppl go into them bc they're rly intellectually interested in them). the statistics doesn't seem to bear this out, but that's bc there are confounders—the underlying pattern still holds. :p

- more idr

- if u concentrate hard on finding ideas related to X, u increase the rate at which u become aware of X-related ideas, but u also decrease the threshold of X-relatedness required for becoming aware of them. thus, if u want to maximize the quality/purity of ur X-related ideas, u may wish to *avoid* looking for them too hard. this is the advantage of serendipity as an explicit search-strategy.
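This tradeoff is the usual rate-vs-purity (recall-vs-precision) one. A toy sketch, assuming ideas arrive with a hidden uniform "X-relatedness" score and you notice an idea when it clears your attention threshold: lowering the threshold (searching harder) raises the count of noticed ideas but lowers their mean relatedness.

```python
import random

random.seed(0)

# Hidden X-relatedness scores of arriving ideas (uniform, illustrative).
ideas = [random.random() for _ in range(10_000)]

def noticed(threshold):
    """Ideas that clear the attention threshold: (count, mean relatedness)."""
    hits = [s for s in ideas if s > threshold]
    return len(hits), sum(hits) / len(hits)

for threshold in (0.9, 0.5, 0.1):
    count, purity = noticed(threshold)
    print(threshold, count, round(purity, 3))
```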

EXAMPLES:

- if u pay ppl to do X, u increase the number of ppl who do X, but u also dilute the field bc now ppl do X for monetary incentives PLUS intrinsic incentives, whereas bfr it was only the latter.
