If you read the AI research papers and books available online, you’ll see many highly sophisticated answers from people with multiple PhDs. Here’s my condensed version, without the 20-to-50-page paper or the 1,109-page textbook: AI systems are human-like systems.
At least, they aim to demonstrate intelligence that is as human-like as possible, though many of today’s AI research scientists might argue that AI is most valuable when it can surpass human intelligence. There is much debate about what constitutes “intelligence”. To be fair, defining intelligence for computer systems is not as easy as it first seems.
Merriam-Webster defines intelligence as: (1) the ability to learn or understand or to deal with new or trying situations; also, the skilled use of reason, and (2) the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests).
Considering that definition, there are AI systems that certainly meet some of the criteria, but not all. The hallmark of today’s AI is that it can “learn” at unprecedented rates. The example that springs to mind is AlphaGo, which learned to play the strategy game Go from a database of over 30 million moves and strategies from expert games in a matter of weeks. It’s worth noting that it learned from human experts’ moves. Another interesting aspect of the Webster definition of intelligence is “to deal with new or trying situations”.
Most AI systems and machine learning algorithms work off of trends and patterns, which by definition are based on information and situations that have already happened. In other words, not new, and certainly not complex enough to be considered “trying”. When AI programs are put into situations they have no training data for, they do not know what to do without human intervention.
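To make that limitation concrete, here is a minimal sketch (a hypothetical toy, not any real AI system) of a “model” that can only answer from patterns it has already seen, and must fall back to a human the moment it faces a genuinely new situation:

```python
# Toy illustration: a pattern-based "model" is only as good as its training data.
# The training set and situations below are invented for illustration.

TRAINING_SET = {
    "sunny": "go outside",
    "rainy": "take umbrella",
    "snowy": "wear coat",
}

def decide(situation: str) -> str:
    """Return a learned response, or admit defeat on unseen input."""
    if situation in TRAINING_SET:
        return TRAINING_SET[situation]
    # No training data covers this case: human intervention is needed.
    return "unknown - requires human intervention"

print(decide("rainy"))       # a pattern seen before
print(decide("earthquake"))  # a genuinely new, "trying" situation
```

Real systems generalize better than a lookup table, of course, but the underlying point holds: interpolation within past patterns is not the same as handling the truly new.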
Back in 1950, Alan Turing devised a test in hopes of answering, “Can machines do what we can do?”. Some AI scientists argue that what he was really trying to discern was “Can computers think?”. The test was based on a party game called the imitation game. A man and a woman would go into two separate rooms and try to answer questions from an “interrogator” in a way that would fool the interrogator into believing each was the other person. After each answer was delivered via handwritten note, the interrogator had to determine whether it was the man’s or the woman’s.
In the same way, Turing’s test would have a human questioner, a human answerer, and a computer answerer, along with a person to type the questions into the computer and relay the computer’s answers back to the human questioner. The human questioner would then determine whether the answers were given by the computer or the person.
Many a research scientist has challenged Turing’s test, and many still do to this day. One such challenger is philosopher and Berkeley professor John Searle. He proposed the Chinese Room thought experiment, which posits that just because a computer can emulate a person’s answers does not mean it can understand the answers it gives, or therefore “think”.
While at IBM, I worked on standardizing implementation methods for cognitive computing services engagements, and here is what I learned. If you break down how humans operate, then cognitive computers should be able to:
1) understand like a human,
2) learn and reason like a human,
3) decide like a human and
4) interact and act on decisions like a human.
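The four stages above can be sketched as a simple pipeline. This is a minimal, hypothetical illustration of the flow (understand, reason, decide, act); the function bodies are invented placeholders, not IBM's actual implementation:

```python
# Hypothetical sketch of a four-stage cognitive loop. Every function body
# here is a toy placeholder standing in for far more complex machinery.

def understand(raw_input: str) -> dict:
    """Parse unstructured input into a structured representation."""
    text = raw_input.lower()
    intent = "greeting" if "hello" in text else "unknown"
    return {"text": text, "intent": intent}

def reason(parsed: dict, knowledge: dict) -> str:
    """Combine the parsed input with accumulated knowledge."""
    return knowledge.get(parsed["intent"], "no matching knowledge")

def decide(hypothesis: str) -> str:
    """Choose an action; escalate when reasoning comes up empty."""
    return "respond" if hypothesis != "no matching knowledge" else "escalate to human"

def act(decision: str) -> str:
    """Carry out (here, merely report) the chosen action."""
    return f"action taken: {decision}"

knowledge_base = {"greeting": "reply politely"}
parsed = understand("Hello there")
print(act(decide(reason(parsed, knowledge_base))))  # action taken: respond
```

Notice how thin each stage is here compared to what a human does at the same step; that gap is exactly what the rest of this piece is about.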
Maybe you are thinking this only sounds simple on the surface, and that humans have lots of other things going on behind the scenes during these “processes”. I couldn’t agree more. When we really dissect how humans think, we have to consider that some humans think simply and linearly, with as few variables as possible (almost like a computer), while others use massive parallel processing, weighing many factors at once to arrive at decisions.
Some people rely more heavily on the facts presented at the exact moment they make a decision. Others take in more information, drawing cues not just from the immediate situation but also from a wealth of past experiences, social norms, consequences, habits, rewards, and fears. Research indicates that over 45% of decisions are guided by habit. Other research points out that 95% of decisions are made subconsciously.
Using myself as an example, most of my “logic” is going on without my conscious knowledge of it. I can often articulate answers about choices, but I’m not good at explaining the reasons for those answers, because much of the time I’m operating off of nuanced information, inconsistent emotion, and “gut feel” that I would be hard-pressed to explain to myself, much less a computer. But it turns out most humans do this. There is even a particular part of our brain, the basal ganglia, that helps keep us from being overwhelmed by the decisions we make every day.
That acknowledged, in another upcoming series we will dive deeper into this topic to see if we can make enough sense of what it means to think and act as a human, so that we can understand how these concepts work relative to AI systems. As a spoiler, based on the research in “The Power of Habit” by Charles Duhigg, I can tell you that a lot of our thinking is autonomic, based on decisions we have encountered before. That is why we can do so many things at the same time.
But when we are confronted with new decisions, our strongest reactions to new stimuli are often made out of fear, not reward. So far, we only have ways to simulate reward for AI; I do not know that we have the ability to create fear in an AI. The question is: how far should we go in making AI human-like in order to make it truly intelligent? Does it need fear to have human-like intelligence? Would a sense of empathy and social norms as developed from fear (loss of life, loss of job, loss of reputation, not fitting in with other humans, physical pain) help AI make the same decisions we would make? Is it important for AI to be human-like in order to exist with us in ways that are beneficial to us?
I'd love to hear your thoughts. Please post a comment below. If you have any questions that you do not feel comfortable posting in this forum, please feel free to contact me directly at: Cortnie@AITruth.org.