I remember (from listening to a bunch of podcasts by German hackers from the mid-00s) a strong vibe that the security of software systems at the time and earlier was definitely worse than what would've been optimal for the people making the software (and definitely not safe enough for the users!).
I wonder (1) whether that is true and, if yes, (2) what led to it happening!
Maybe companies were just myopic when writing software then, and could've predicted the security problems but didn't care?
Or was the prediction error just an outlier: companies and industries on average correctly predict the importance of safety & security, and this case was the exception?
Or is this a common occurrence? Then one might chalk it up to (1) information asymmetries (normal users don't appreciate the importance of software security, let alone evaluate the quality of a given piece of software) or (2) principal-agent problems within firms (managers had a personal incentive to cut corners on safety).
Thanks! This is useful.
My impression was that for very big companies, and especially for industry customers, it has gotten better—Microsoft improving the security of Windows, Google creating AFL, &c.—
but otoh I can imagine that this drops off very quickly as one moves to players below the top of the industry.