I’ll be the first to admit that gaining an understanding of A.I. and Machine Learning (M.L.) has been frustrating. For me, it’s because I’ve always had a need to know how things work.
With my car, it’s really simple: I open the hood and the parts are right there to see. When A.I. suddenly can’t recognize a sheep (more on that later), I can’t just pop the hood and poke around.
What cleared up my frustrations with A.I. was discovering that it doesn’t have to be hidden to work. In fact, this idea of explainable A.I. opens up a whole new world of discovery.
All A.I. is just algorithms: math. In the case of neural nets (a form of hidden A.I., by the way), the algorithms work their magic one trial at a time, gradually finding the best path through the data.
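If you’re curious what “one trial at a time” actually looks like, here’s a toy sketch in plain Python. Everything in it (the data points, the starting weight, the learning rate) is invented for illustration, but the nudge-and-repeat loop is the heart of how these algorithms learn.

```python
# Toy sketch of iterative training: fit y = w * x to a few example points.
# All values here are made up for illustration.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, expected output) pairs

w = 0.0              # the model starts out knowing nothing
learning_rate = 0.05

for trial in range(100):                     # one small correction per trial
    for x, y in data:
        error = (w * x) - y                  # how wrong is the current guess?
        w -= learning_rate * error * x       # nudge w to shrink that error

print(f"learned w = {w:.2f}")                # settles near 2.0 for this data
```

A real neural net does this same nudging across millions of weights at once, which is exactly why nobody can simply read the answer off afterward.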
The results often leave even the best researchers scratching their collective heads as they try to understand why the algorithm behaved the way it did. As it turns out, A.I. and M.L. algorithms often don’t do what we expect, and they certainly arrive at some strange, and sometimes dangerous, conclusions.
Humans Programmed It, So Isn’t There an Easy Fix?
Let me back up a bit before we address how to fix A.I.’s seemingly random and malicious behavior. In order for A.I. to work reliably, one has to define a narrow problem.
This starts to get into the realm of opinion, politics, and religion if you make these kinds of statements in the wrong company! I’m going out on a limb here, repeating what I’ve learned about A.I. over the past two years.
The bottom line is that “General A.I.” aka “Strong A.I.” doesn’t exist, and if it did, it wouldn’t work.
“But wait,” you say, “I’ve seen A.I. work! Fraud!!!”
I know, I said this would be contentious. What you have seen is, in fact, “Narrow A.I.” or “Weak A.I.” in action. That doesn’t denigrate its ability: A.I. corrects my spelling nearly continually.
Yet it still gets some of the same words wrong, doesn’t it? And heaven forbid I accept a misspelling – now, my brilliant A.I. will help me misspell that word in perpetuity.
There’s a general misunderstanding of which problems A.I. is good at and which it isn’t.
We tend to be influenced a little too much by Hollywood and the various media outlets.
Even Microsoft has a blurb saying their A.I. is approaching “human parity.” It’s not. Of course, they go no further than making the claim, because it helps the narrative that “we all need A.I.” And that much I can’t dispute.
A.I. is doing some really great things, and those of us here at iig who use A.I. regularly with Grooper really are seeing fantastic results.
But it’s not magic. It’s an ALGORITHM. Math. The predictable, concrete, logical, and infallible nature of mathematics is what’s behind A.I. And this is where we run into trouble. We’re trying to use math to simulate human cognition. Humans aren’t logical, even when they think they are!
What A.I. ends up being good at is repeatable, predictable tasks that are well (narrowly) defined.
“This looks like that” works very well. Except when it doesn’t. You see, the algorithms are designed by humans, so they have errors and bugs in them as well.
As one story around computer vision goes, a neural net (A.I.) was trained to spot sheep. Virtually all of the training set was sheep in a field, because, well, sheep are usually found in fields, right? So the A.I. appeared to be working: it was finding sheep.
But, as it turns out, A.I. doesn’t understand. It just finds things that look like the things it was trained with. In this case, when a sheep was shown to the A.I. all by itself, it wasn’t detected. But if a green field was shown to the A.I., it detected a sheep.
The A.I. had associated the field with “sheep,” not the animal itself. As it turns out, this is very common with A.I. Neural networks are, by definition, networks with at least one hidden layer.
It’s that hidden part that gets you. You feed the A.I. data, and if the results aren’t what you expected, you don’t get an explanation of what went wrong. You get “no sheep.”
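For the curious, here’s a minimal sketch of how that failure happens. It uses scikit-learn’s LogisticRegression on two invented features (“green background” and “sheep-shaped object”) instead of a real image network, and the data is made up, but the shortcut it learns is the same one from the story.

```python
# Toy reconstruction of the sheep story. Feature 0 = "background is a green
# field", feature 1 = "sheep-shaped object visible". Data is invented: every
# training sheep photo has a field, but the sheep itself is sometimes hard
# to see, so the field becomes the stronger signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
sheep_photos = np.column_stack([
    np.ones(100),                            # field always present
    (rng.random(100) > 0.3).astype(float),   # sheep visible ~70% of the time
])
other_photos = np.zeros((100, 2))            # indoor scenes: no field, no sheep

X = np.vstack([sheep_photos, other_photos])
y = np.array([1] * 100 + [0] * 100)
model = LogisticRegression().fit(X, y)

print(model.predict([[1.0, 0.0]]))  # empty green field -> [1]: "sheep!"
print(model.predict([[0.0, 1.0]]))  # sheep, no field   -> [0]: "no sheep"
```

The model isn’t broken; it faithfully learned the pattern its training data actually contained.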
When you’re using A.I. to achieve business outcomes, it makes sense to choose a system that exposes its A.I., so you can tailor it for maximum productivity and output.
Here at iig, we pride ourselves on the use of transparent A.I. We use the TF/IDF algorithm quite a bit, and when we do, we show the results in real time. We show the rankings of each term, field, etc. on the document so you know what’s happening.
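To give a feel for the kind of rankings TF/IDF produces, here’s a generic sketch using scikit-learn (to be clear: this is the textbook algorithm with invented sample documents, not Grooper’s implementation). Each term gets a weight you can inspect and sort, which is precisely what makes the approach explainable.

```python
# Generic TF/IDF scoring; the sample documents are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "invoice number and invoice total due on receipt",
    "purchase order number and shipping address",
    "invoice past due, remit total to billing address",
]

vectorizer = TfidfVectorizer()
weights = vectorizer.fit_transform(docs)

# Rank the first document's terms by TF/IDF weight: terms that distinguish
# this document from the others float to the top of the list.
terms = vectorizer.get_feature_names_out()
row = weights[0].toarray().ravel()
for term, score in sorted(zip(terms, row), key=lambda pair: -pair[1]):
    if score > 0:
        print(f"{term:10s} {score:.3f}")
```

Because every score is a simple, inspectable number, there’s no hidden layer to guess about: you can see exactly why a document ranked the way it did.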
And we go one step further: we engineered the software with configurable A.I. Trained users can adjust and tune the algorithm to maximize results, and, as always, the results appear in real time for rapid testing and deployment.
Some people call it innovative. We’re just calling it Explainable A.I. Why would you want it any other way?