The combination of deep reinforcement learning, multi-agent learning, and imitation learning has driven advances in using computers to solve problems. But most of these problems, such as winning a chess game, vacuuming a floor, or detecting signs of disease in X-rays of human organs, are well-defined tasks with very specific competencies. Although the speed at which these tasks can be accomplished is baffling to our human sense of time, the machines can do nothing more than they are programmed to do. This is called Artificial Narrow Intelligence (ANI) or Weak AI (setting aside the more precise definition of Weak AI for now). Every current “AI” product and service you can find today falls into this category.
However powerful and however human-like in execution, these programs are most certainly not like a human mind. A human mind can be asked to “make this room look nice” and determine which areas need vacuuming (by an autonomous robot, perhaps) and which need a more general rearrangement of objects. The problem solving is multi-dimensional, the task is not clearly defined, and the outcome is highly subjective. This is what a human mind handles very well.
Defining these systems as “narrow” or “weak” in no way means they are simple or incomplete. The models behind them are very powerful. With linear regression methods such as Ordinary Least Squares or Gradient Descent, we can predict waiting times in call centres or the price of an airline seat. A Naïve Bayes model can power an email spam predictor, and matrix algebra makes Natural Language Processing possible. If these models are so powerful yet rely on basic statistics and some simple algebra, why are papers filled with angst and handwringing? Aren’t these models super intelligent?
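To make the spam-predictor idea concrete, here is a minimal Naïve Bayes sketch in plain Python. The tiny corpus, word lists, and function names are invented for illustration; a real system would train on thousands of labelled emails and handle tokenization far more carefully.

```python
import math
from collections import Counter

# Tiny invented corpus: 1 = spam, 0 = ham.
train = [
    ("win cash prize now", 1),
    ("claim your free prize", 1),
    ("cash bonus win win", 1),
    ("meeting agenda for monday", 0),
    ("lunch with the team", 0),
    ("monday project meeting notes", 0),
]

def fit(data):
    """Count per-class word frequencies and class priors."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for text, label in data:
        priors[label] += 1
        counts[label].update(text.split())
    return counts, priors

def predict(text, counts, priors):
    """Pick the class with the higher log posterior (Laplace-smoothed)."""
    vocab = {w for c in counts.values() for w in c}
    scores = {}
    for label in (0, 1):
        total = sum(counts[label].values())
        score = math.log(priors[label] / sum(priors.values()))
        for w in text.split():
            # Add-one smoothing so unseen words never zero out the score.
            score += math.log((counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

counts, priors = fit(train)
print(predict("free cash prize", counts, priors))        # → 1 (spam)
print(predict("notes from the meeting", counts, priors)) # → 0 (ham)
```

The whole classifier is counting and logarithms, which is exactly the point made above: the math is basic, yet the resulting model is genuinely useful.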
Unfortunately, the possibility of inadvertent mistakes is very real. Very few companies allow outsiders to look at the exact variables used to train their models, and, given the complexity of some models, it is difficult even for humans to discover which variables were used. One example comes from a deep learning model trained to recognize camouflaged military vehicles. A very precise RNN model was trained on photos and then used in real life, where it failed miserably.
What could the problem possibly have been? As it turns out, the model was registering the shadows thrown off by the vehicles, but in real life much of the live video feed was taken on cloudy days. The model was very “accurate,” yet the task it performed was not the task the humans required. Fortunately, a rigorous testing and analytics team was able to review the test results before the system was taken operational. Much of our use of AI falls into this problem area: without critical analysis and deeper understanding, we may be applying accurate systems to the wrong problems.
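The shadow failure can be sketched in a few lines. This is a deliberately toy model, not the vehicle detector itself: we invent a synthetic dataset where a spurious cue (“sunny”, standing in for the shadows) perfectly matches the label during training but is unrelated to it in deployment, and we score a “model” that latched onto that cue.

```python
import random

random.seed(0)

def make_data(n, sunny_matches_label):
    """Hypothetical examples: 'vehicle' is the true signal, 'sunny' a confound."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        # In training, sunshine (shadows) is perfectly correlated with the
        # label; on cloudy deployment days, it is just noise.
        sunny = label if sunny_matches_label else random.randint(0, 1)
        data.append({"sunny": sunny, "vehicle": label, "label": label})
    return data

def shadow_model(x):
    """A model that keys on the shadow cue instead of the vehicle."""
    return x["sunny"]

def accuracy(model, data):
    return sum(model(x) == x["label"] for x in data) / len(data)

train = make_data(1000, sunny_matches_label=True)
deploy = make_data(1000, sunny_matches_label=False)
print(accuracy(shadow_model, train))   # 1.0 on the confounded training set
print(accuracy(shadow_model, deploy))  # roughly 0.5 once the confound breaks
```

Training accuracy is perfect, deployment accuracy is a coin flip: the model was “accurate” on the data it saw, but it was solving the wrong problem.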
So how do we ensure that systems work as we wish? Every enterprise needs a rigorous testing and sandbox environment. To foster work on ANI, enterprises should make dummy data and test environments available for suppliers to showcase their ideas and products. Even if you never get a peek under the hood, a dummy data set lets you create a test environment that resembles your own company’s needs. The variables used to construct the algorithm may be hidden from you, but you can ensure fewer “surprises” if you let suppliers test on a data set that reflects your company’s needs.
ANI is the state of the AI world at the moment. Very powerful algorithms are put to work on specific tasks and routine problems. Although the math behind these products is simple, the complicated construction of the models means that not all the variables may be available to view. A robust testing regime will help ensure that the outcomes you want are delivered by the products you use.
Stay tuned for our next entry, where we explore AGI and its place in today’s world.