I'm reading the 1989 book "The Reliability of Expert Systems" by Erik Hollnagel, and I find it intensely fascinating how many of the current problems of "AI Blackbox Says No" were already present then, and how few we have fixed or even learned from.

Entire classes of fundamental problems, like "humans like thinking with symbols, computers are all about numbers, how do you translate between the two?", have just been shrugged off and vanished from discussions. It's weird to read.


@mordecai > "humans like thinking with symbols, computers are all about numbers, how do you translate between the two?" have also just vanished from discussions

still exists, but under the term mechanistic interpretability. it asks the question of what's going on in the neural net and how it got this result

still has lots of open problems
