wow thanks for all the likes everyone
anyway donate to @GiveDirectly
https://www.givedirectly.org/
Thanks! This is useful.
My impression was that for very big companies, and especially for industry customers, it has gotten better: Microsoft improving the security of Windows, Google creating AFL, &c.
but then otoh I can imagine that this drops off very quickly as one moves to not-top-of-the-industry players
@snacks
nice
@vaartis
aye mood af
@pseudoriemann crab
This is, of course, in the context of the development of AI, and the common argument that "companies will care about single-single alignment".
The state of software security engineering until the mid-00s seemed like a counterexample to me, but on reflection I'm now not so sure anymore.
@snacks interesting! this feels unintuitive to me: if you have a distribution of random local events of different sizes that can wipe your species out, having the species be distributed widely means that only especially big (and therefore rare) catastrophes can wipe it out.
do you remember where you got this information from?
Another reason might be that lower-level software can usually offload any security issues as a reputational externality onto end-user software: sure, in the end Intel's branch predictor is responsible for Meltdown and Spectre, and memory refresh rates set too low are why we can nicely Rowhammer it, but what end-user will blame Intel and not "and then Chrome crashed and they wanted my money"?
Or was the error in prediction just an outlier: do companies and industries on average correctly predict the importance of safety & security, and this was an exception?
Or is this a common occurrence? Then one might chalk it up to (1) information asymmetries (normal users can't judge the importance of software security, let alone evaluate the quality of a given piece of software) or (2) incentive problems within firms (managers had a personal incentive to cut corners on security).
I remember (from listening to a bunch of podcasts by German hackers from the mid-00s) a strong vibe that the security of software systems at the time and earlier was clearly worse than what would've been optimal for the people making the software (and definitely not safe enough for the users!).
I wonder whether that impression is (1) true and, if yes, (2) what led to it!
Maybe companies were just myopic when writing software then, and could've predicted the security problems but didn't care?
big next project: should I
1. do the Overcoming Bias bounty[1]
2. write something about attention spans for this[2]
3. "finish" a library of forecasting datasets[3]
4. run one (1) nootropics for meditation self-blinded RCT (see the blinding sketch after this list)
[1]: https://www.lesswrong.com/posts/QaDwBio8MLqRvTREH/usd10k-bounty-read-and-compile-robin-hanson-s-best-posts
[2]: https://slimemoldtimemold.com/2023/01/01/mysterious-mysteries-of-unsolved-mystery-call-for-entries/
[3]: https://github.com/niplav/iqisa
(lowercase because I will take this as a mere suggestion)
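for №4, the blinding step is simple enough to sketch. a minimal Python version, assuming one identical-looking capsule per day; the day count, arm names, and file name are placeholders, not a settled protocol:

```python
import csv
import random

N_DAYS = 30  # placeholder trial length, one capsule per day


def make_schedule(n_days, seed=None):
    """Randomly assign each day to 'nootropic' or 'placebo', balanced."""
    rng = random.Random(seed)
    arms = ["nootropic"] * (n_days // 2) + ["placebo"] * (n_days - n_days // 2)
    rng.shuffle(arms)
    return list(enumerate(arms, start=1))


if __name__ == "__main__":
    schedule = make_schedule(N_DAYS)
    # The key file stays unopened until the experiment is over; a helper
    # (or a mechanical procedure) labels the capsules by day number, so
    # I never see which arm a given day belongs to.
    with open("unblinding_key.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["day", "arm"])
        writer.writerows(schedule)
    print(f"wrote blinded schedule for {N_DAYS} days to unblinding_key.csv")
```

the point is just that the assignment is generated and sealed before the trial starts, outcomes get recorded by day number, and the key only gets read at analysis time.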
A slowly solidifying feeling that science generally doesn't answer the types of questions I'm interested in
@hidden @ai @grips @MercurialBlack
№7, very comfy
I operate by Crocker's rules[1].