Or was the error in prediction just an outlier? That is, do companies and industries on average correctly predict the importance of safety & security, and this case was simply the exception?
Or is this a common occurrence? If so, one might chalk it up to (1) information asymmetries (normal users don't appreciate the importance of software security, let alone have the ability to evaluate the quality of a given piece of software) or (2) misaligned incentives within firms (managers had a personal incentive to cut corners on safety).
This is, of course, in the context of AI development and the common argument that "companies will care about single-single alignment".
The state of software security engineering until the mid-2000s seemed like a counterexample to me, but on reflection I'm not so sure anymore.
Thanks! This is useful.
My impression was that for very big companies, and especially for industry customers, things have gotten better: Microsoft improving the security of Windows, Google creating AFL, etc.
But on the other hand, I can imagine that this drops off very quickly as one moves away from the top-of-the-industry players.
Another reason might be that lower-level software can usually turn its security issues into a reputational externality borne by end-user software: sure, in the end Intel's branch predictor is responsible for Meltdown and Spectre, and overly long DRAM refresh intervals are what let Rowhammer flip bits, but what end user will blame Intel rather than "and then Chrome crashed and they wanted my money"?