niplav boosted

in the alternate universe where we took the biopunk tech tree option, “chinchilla scaling” is way cooler

niplav boosted

If there were an account that was “animals that go hard” I would be one of them

71f453558de50865fb6feec836f4ed868542664afac639bc070304259418bc52
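
The line above looks like a SHA-256 precommitment: publish the hash now, reveal the preimage later. A minimal sketch of the pattern, with a hypothetical message and nonce (the real preimage is, of course, unknown):

```python
# Hash precommitment: post sha256(message + nonce) now, reveal both later.
import hashlib, secrets

message = b"example prediction"         # hypothetical: stands in for the secret text
nonce = secrets.token_hex(16).encode()  # random salt so short messages can't be brute-forced
commitment = hashlib.sha256(message + b"|" + nonce).hexdigest()
print(commitment)  # publish this; later reveal (message, nonce) so anyone can recompute it
```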

niplav boosted
it sucks that English cannot distinguish Schloss, Burg, Feste, Herrenhaus, and Rittergut.

implementing the Augmented Lagrangian method because you want to: cozy, relaxed

implementing the Augmented Lagrangian method because your degree requires it: nausea-inducing, terminal ugh-field

doing it anyway in a 3-hour haze of nicotine-fueled parameter-tuning: self-transcending, supremely agentic
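
For concreteness, a minimal sketch of the augmented Lagrangian method (method of multipliers) for minimizing f(x) subject to c(x) = 0; the toy problem, the SciPy inner solver, and the penalty schedule are illustrative choices, not anything from the original implementation:

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, mu=10.0, outer_iters=20):
    """Minimize f(x) subject to c(x) = 0 via the method of multipliers."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(c(x))  # Lagrange multiplier estimates
    for _ in range(outer_iters):
        # Inner step: unconstrained minimization of the augmented Lagrangian
        #   L_A(x) = f(x) + lam . c(x) + (mu / 2) * ||c(x)||^2
        L = lambda x: f(x) + lam @ c(x) + 0.5 * mu * (c(x) @ c(x))
        x = minimize(L, x).x
        lam = lam + mu * c(x)  # first-order multiplier update
        mu *= 2.0              # crude penalty schedule -- the parameter-tuning part
    return x

# Toy problem: minimize ||x||^2 subject to x[0] + x[1] = 1; optimum is (0.5, 0.5).
f = lambda x: x @ x
c = lambda x: np.array([x[0] + x[1] - 1.0])
print(augmented_lagrangian(f, c, np.zeros(2)))
```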

What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
And because it's true, it is what is there to be interacted with.
Anything untrue isn't there to be lived.
People can stand what is true,
for they are already enduring it.

lesswrong.com/s/5bZZZJ5psXrrD5

niplav boosted

One day ladies will take their computers for walks in the park and tell each other, "My little computer said such a funny thing this morning".

-Alan Turing
(1912-1954)

when will we have the first LessWrong post with >1k karma?

Is Cowen's 2nd law ("there's a literature on everything") basically true?

(I.e. if you're not an expert in a certain domain, you're unlikely to come up with a question that the human intellectual endeavour hasn't tackled already)

This is, of course, in the context of the development of AI, and the common argument that "companies will care about single-single alignment".

Software security engineering until the mid-00s seemed like a counterexample to me, but on reflection I'm not so sure anymore.

niplav boosted

I wish all these newsletters were just blogs

Another reason might be that lower-level software can usually turn its security issues into a reputational externality for end-user software: sure, in the end Intel's branch predictor is responsible for Meltdown and Spectre, and DRAM refresh intervals are set loosely enough that we can nicely Rowhammer bits loose, but what end user will blame Intel and not think "and then Chrome crashed and they wanted my money"?

Or was the error in prediction just an outlier, i.e. companies and industries on average correctly predict the importance of safety & security, and this case was the exception?

Or is this a common occurrence? Then one might chalk it up to (1) information asymmetries (normal users can't judge the importance of software security, let alone evaluate the quality of a given piece of software) or (2) incentive problems within firms (managers had a personal incentive to cut corners on security).

I remember (from listening to a bunch of podcasts by German hackers from the mid-00s) a strong vibe that the security of software systems at the time and earlier was definitely worse than what would've been optimal even for the people making the software (and definitely not safe enough for the users!).

I wonder whether that is (1) true and, if yes, (2) what led to it happening!

Maybe companies were just myopic when writing software then, and could've predicted the security problems but didn't care?

big next project: should I

1. do the Overcoming Bias bounty[1]
2. write something about attention spans for this[2]
3. "finish" a library of forecasting datasets[3]
4. run one (1) self-blinded nootropics-for-meditation RCT

[1]: lesswrong.com/posts/QaDwBio8ML
[2]: slimemoldtimemold.com/2023/01/
[3]: github.com/niplav/iqisa

(lowercase because I will take this as a mere suggestion)

A slowly solidifying feeling that science generally doesn't answer the types of questions I'm interested in

Questions I've not been able to answer with 10 minutes of websearching:

What is the relation between the population size of a species and the longevity of that species?
