I started in AI R&D in 1991 at Allen-Bradley, followed by three years in the AI Development Group at Eaton Cutler-Hammer. I learned three lessons there that have held up as immutable truths over the past 30 years:
1. AI is Biased - Thankfully, this is no longer an issue hidden on the underbelly of the beast: the truth is that virtually all algorithms are biased. Many biases are built into the training data, starting with the decisions about what data to include and where to source it. Be aware that finding true uniqueness, that diamond in the rough, is highly unlikely if the data set contains no prior examples of it.
2. AI Cannot Predict the Future - There's a specific bias many overlook: time. AI is biased entirely in favor of the past; it cannot predict the future. Instead, what we call "predictions" are really just patterns recognized in the past, extrapolated into a guess about the future (the sketch after this list makes the point concrete). Keep the hot hand fallacy in mind, because AI often doesn't. Probability itself rests on the biased assumption that past frequencies tell us the likelihood of future events, and being close, even often, is not the same as being right.
3. AI Always Gives an Answer - Today we call them "hallucinations", but AI giving nonsensical answers is not a new problem. AI can be trusted until it can't, and there's no way to know which realm a given answer sits in. Given my background in plant-floor automation and embedded systems, you can understand how this was, and continues to be, a showstopper in many ways.
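
Here's a minimal sketch of lessons 2 and 3 in action. The data is entirely made up for illustration (a sine-wave "history", two Gaussian clusters, and a probe point are my assumptions, not anything from that era's research); the behavior it shows is not.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Lesson 2: a "prediction" is just the past, extrapolated.
# Fit a linear trend to the rising half of a sine cycle.
t_past = np.linspace(0, np.pi / 2, 50)   # history: the upswing only
y_past = np.sin(t_past)
slope, intercept = np.polyfit(t_past, y_past, 1)

t_future = np.pi                          # the cycle has turned by here
forecast = slope * t_future + intercept   # the trend says keep climbing (~2.2)
actual = np.sin(t_future)                 # reality: back down to 0.0
print(f"forecast={forecast:.2f}, actual={actual:.2f}")

# Lesson 3: the model always gives an answer.
# Train on two tight clusters, then probe far outside both.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)

probe = np.array([[0.0, 100.0]])          # nothing like the training data
print(clf.predict(probe), clf.predict_proba(probe).round(3))
# It still picks a class, with near-total "confidence" -- and nothing
# in the output flags that the input was garbage.

The point of the toy: both failures are silent. The extrapolation looks like a prediction right up until the world turns, and the classifier's answer for garbage input is indistinguishable from its answer for good input.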
AI matters. It's helping us solve problems and realize efficiencies that were out of reach before. It's a great tool, but it's not a replacement for independent, critical thinking. And always be skeptical of any AI deployment where safety and security are paramount. Back in 1994, some of my AI research in fuzzy logic and neural networks focused on self-driving vehicles. And we're still waiting.
