Is Silicon Intelligence Just Around the Corner?

By Rafi Moor

2005

The accelerated development of computer technology in recent decades has not brought a breakthrough in artificial intelligence. Will the 21st century be the one in which we first see silicon intelligence? Shall we succeed at last in creating intelligent computers? Can intelligence spontaneously emerge in complex computers or in the Internet? This article deals with these questions.

  

The Game of Chess

The efforts to imitate human intelligence are as old as computer technology itself, or even older. One of the first attempts was teaching computers how to play games. The game of chess seemed a good practice arena: on the one hand, a person who plays chess well is regarded as highly intelligent, and on the other hand, the game has well-defined rules and criteria. In 1957 Herbert Simon, one of the pioneers of the field of artificial intelligence, predicted that within 10 years a digital computer would be the world's chess champion. This prediction turned out to be much too optimistic. More than three decades later, in 1989, the computer program "Deep Thought", which ran on one of the fastest computers IBM had at the time, lost a match against the world champion Garry Kasparov. Kasparov said after the match:

"Intuition and profound ideas win chess games at the highest level, not counting."

He also said:

"There is no way a grandmaster would be defeated by a computer in a tournament before 2000."

In 1996 Kasparov lost a game but won 3 in a match against a new version of the program, which had been renamed "Deep Blue". In 1997, 40 years after Simon's prediction and 3 years before the year 2000, Kasparov was defeated by a newer version of the program called "Deeper Blue". He said after his loss:

"These billions and billions of calculations can, at one point, match my intuition…"

"I'm looking at the position and make a decision based on creativity, intuition, fantasy and a little bit of calculation; the machine is looking at the position and making its decision, which is the same, but based purely on calculation. If the situation repeats time and time and time again, we're having the same result. It's sort of intelligence because the result is the same."

But is it really a sort of intelligence, or is playing chess just too simple a task to serve as a criterion for intelligence? A more demanding criterion was suggested by Alan Turing, the mathematician and philosopher who is regarded as the founder of computer science. What is now known as the "Turing Test" is based on what Turing, in a paper he published in 1950, called the "imitation game": an interrogator communicates with a person in one room and a computer in another room. By asking them questions, he tries to find out which is which. Turing predicted that by the end of the century computers would be able to imitate human conversation so well that an average interrogator would have no more than a 70 percent chance of making the right identification after 5 minutes. This was another optimistic prediction. To this day no computer program has come close to passing the Turing Test. Apparently small talk is much harder for a computer than playing chess. We may have underestimated the complexity of human intelligence and the technology needed to imitate it.

 

Imitating the Real Thing

Before we go on, there is one more question to ask: suppose we succeed in creating programs so good that an interrogator has only a 50 percent chance of being correct (no better than a random guess), and not after just 5 minutes but after as much time as he wishes. If a computer can react exactly like a living person, will that be real intelligence, or still just an imitation of intelligence?

A well-known joke tells of a man asking a circus manager for a job. "What can you do?" asks the manager. "I imitate birds," he replies. As the manager dismisses him scornfully, he spreads his arms and flies out the window.

If a woman can sing exactly like Maria Callas, is she a great singer or just a very good imitator? If a man paints a painting that no expert can tell apart from a work of Van Gogh, is it a real work of art or merely a fake? One might claim that the painting was made with the intention to imitate Van Gogh and not out of genuine original inspiration, and thus, though the result is identical, it is not real art because of the way it was created. But what if this piece of information is lost forever and everyone believes it is an original Van Gogh? Is it still not a real piece of art? The question is: if there is no way to distinguish between the original and an imitation, does that make the imitation the real thing?

This is a philosophical question that has no single objectively correct answer. But one thing is clear: for any practical purpose there is no difference between the two. So a perfect imitation of human intelligence would be real intelligence for any practical purpose. If it walks like intelligence and quacks like intelligence, it is (practically) intelligence.

 

Lucas and Gödel

The question of whether a machine can imitate the human brain at all lies at the heart of the debate between the supporters and the opponents of artificial intelligence. The British philosopher John Lucas argues that it cannot. In his article "Minds, Machines and Gödel" he says that though a machine can simulate any piece of mind-like behavior, it cannot simulate every piece of it. He constructed an elegant logical proof of this claim, based on Gödel's theorem. The weakness of this proof is that it relies on the premise that there is an essential difference between the mind of a living creature and a machine. He presumes that while the number of types of operations a machine can perform is always finite, a mind can always perform at least one more type and thus has no limit on its types of operations. Lucas assumes that a machine can only determine that something is true by proving it according to a set of rules. A human mind, on the other hand, can see that something is true even if it is unprovable, as "any rational being could follow Gödel's (unprovable) argument".

The premise that a living mind has qualities that machines cannot have, must be based on a belief in the philosophical concept of dualism - the belief that mental and physical are different in kind. Against this view stands the doctrine of materialism that says that everything that actually exists is material or physical. According to materialism everything we call mind is actually physical and chemical processes in the brain. The mental and the physical are one.

A belief is totally subjective and there is no objective way to judge how correct or true a belief is. All beliefs are equally right (and equally wrong).

Being a materialist, I believe that the brain is a biological machine: a much more complex and refined machine than any existing computer, but still no more than a machine. Believing this, I cannot accept Lucas' argument. If a mind is a machine too, there can be no substantial difference between it and other kinds of machines. If the number of types of operations that a machine can perform is finite, so is the number that a mind can perform. If a mind can decide that something is true without proving it, so can a machine.

 

Critical Mass

Our machines may simply not be powerful and refined enough for something as complex as intelligence. There is an interesting argument that says there is a threshold of complexity and computing power that must be reached before intelligence and consciousness become possible. There is also a theory that says that once this threshold is crossed, consciousness and intelligence will spontaneously emerge in such machines. It is said to be similar to critical mass and chain reaction: once a critical mass of enriched uranium is put together, an atomic explosion occurs spontaneously, without the need for any further action.

Perhaps a single computer is still very far from this threshold. But maybe the Internet, a network of millions of computers that even bears some resemblance to the network of neurons in the brain, might develop intelligence and become a kind of global brain.

I disagree with this theory too, and here is why: in nature we see many different levels of intelligence. There is a variety of intelligence levels among human beings. Apes are intelligent, dogs are intelligent, even mice have some intelligence. We actually see a continuum of intelligence levels, starting from no intelligence at all and going up to the highest human intelligence. If the threshold theory were correct, we would see intelligence starting at a certain level and would not find any level of intelligence lower than that. Since this is not the picture in nature, the threshold theory is most probably wrong.

 

Brain versus Computer

Though (as a materialist believes) a brain and a computer are both machines, they operate quite differently. The brain is by no means a digital processor. While a computer can only process information logically, the brain has other ways of doing it as well. The most important of them is the capability of classification and generalization. Sometimes we are not aware of the power of these capabilities, and we tend to regard the logical capabilities as the most important. The brain cannot compete with a computer, even a most primitive one, in performing logical operations. A computer can calculate, compare, sort and do many other logical operations at enormous speed and with extraordinary precision. The real strength of the human brain is actually its ability not to be precise. The brain can characterize information by inaccurate features and classify it according to them. A human being can, for instance, look at a handwritten text and understand it in a fraction of a second. Though different people write the same letters very differently, the brain instantly recognizes the most general characteristics of every letter and identifies it. Doing this simple classification using only logical operations requires thousands of definitions and calculations. This is why computers have great difficulty reading handwriting.
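
To make the contrast concrete, here is a toy sketch of classification by inexact characteristics. Each letter is reduced to a couple of rough, invented features (stroke count and "roundness"), and a new sample is simply assigned to the nearest known class. Real handwriting recognition is far harder, of course; the point is only that the match is approximate, not exact.

```python
import math

# Hypothetical average feature vectors per letter: (strokes, roundness).
prototypes = {"O": (1.0, 0.9), "I": (1.0, 0.1), "E": (4.0, 0.2)}

def classify(sample):
    """Return the letter whose prototype lies closest to the sample."""
    return min(prototypes,
               key=lambda letter: math.dist(prototypes[letter], sample))

# A sloppily written "O": one stroke, fairly round but not perfectly so.
print(classify((1.2, 0.7)))  # -> "O", despite the inexact match
```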

This capability of generalization and classification did not develop in the brain without reason. It is very important for our survival. It is the tool with which we learn about our world and its rules.

A man goes out to the woods to search for food. He sees a nice big animal and thinks to himself: "Mmmm, a lot of food, let's hunt it." The animal sees the man and thinks the same. The man barely escapes. He decides to call the animal "bear". The next day he goes to the woods again and meets another animal. It is a little smaller, it is black rather than brown, and it looks a little different. Now, if he thought "this one might be food", he would have little chance of surviving; but he can generalize and classify, and instead he thinks: "A bear, let's run away." Generalization should not be too broad either. If the man meets a deer and thinks "A bear, let's run away", not only will he waste precious energy on an unnecessary run, he will also miss a good meal.

 

How does it "mean"?

Another argument against artificial intelligence is the claim that computers can only process the information they have; they cannot understand its meaning. The most famous version is the "Chinese Room" thought experiment, introduced by John Searle in his article "Minds, Brains, and Programs" (1980). Searle compares a computer that seems to speak Chinese, and even passes the Turing test in Chinese, to a man sitting in a closed room. This man has a set of English instructions that relates any possible string of Chinese symbols handed to him to another string of symbols, which he returns. Following these instructions he can give a reasonable answer to any question in Chinese and convince anyone outside the room that he understands Chinese. A computer, says Searle, works in a similar way: it can run a program that relates any string of characters representing a question to another string of characters representing the answer. But just as the man in the Chinese room does not understand the meaning of the questions and the answers, the computer cannot understand them either.
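
Searle's room is easy to caricature in code: a pure lookup table that maps incoming symbol strings to outgoing ones. The tiny, made-up rule book below stands in for his book of English instructions; nothing in it "understands" the symbols it shuffles.

```python
# A made-up rule book relating questions to scripted replies.
rule_book = {
    "你好吗?": "我很好, 谢谢.",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会.",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(question):
    """Return the scripted reply, or a stock evasion for unknown input."""
    return rule_book.get(question, "请再说一遍.")  # "Please say that again."

print(chinese_room("你好吗?"))  # The room answers without understanding.
```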

How do we understand the meaning of things? To understand this we first have to understand the meaning of meaning. (And here we fall into an infinite recursion, because in order to understand the meaning of meaning we first have to understand the meaning of meaning…) A lot has been written about the meaning of meaning. In a famous book by this name, published in 1923, the authors, Ogden and Richards, list 16 different definitions of the term "meaning". Almost all the definitions of meaning that have been suggested over the years fall under the general one that says that meaning is the place of something within a system. Since a thing can belong to more than one system, it can have several meanings at different levels. Naturally we need good knowledge of the system if we want to find the place of a thing within it. But this is not enough. Usually these systems include an infinite number of elements, or at least too many to know every single one of them. How, then, can we find the place of a thing in such a system and understand its meaning?

In an article titled "Towards a Global Brain" (written in Hebrew), the Israeli thinker Zvi Yanay illustrates the claim that computers do not understand the meaning of information with the following two sentences:

"The man in the window threw a flowerpot."

"The man in the wall threw a flowerpot."

Let's examine the meaning of these sentences in two levels:

First there is the linguistic meaning: the place of the sentences in the language system. The meaning of a sentence at this level is the scene it describes. Finding the meaning at this level should apparently be quite easy. Every word in the sentence has a meaning that is an object, an action, or a relation between objects and actions. All we need is to match every word to what it symbolizes and assemble the scene. But the words "man", "window", "wall" and "flowerpot" do not symbolize specific objects; they represent classes of objects. We must understand the scene without referring to specific objects. Moreover, each reader of the sentence has a subjective idea of each object class, an idea that depends on his own experience, his culture, and other environmental influences. Each reader's scene will be somewhat different from everyone else's. We can see that the meaning of a sentence is not exact. The meaning of a sentence is something general that encompasses all the possible different meanings of individuals.

Both of the above sentences have a clear linguistic meaning.

Another level of meaning of the sentences is their place in reality. Everybody can see that the scene the first sentence describes might happen in reality, but that of the second sentence cannot. How do we know that? First, we have a lot of knowledge about reality, based on our own experience and on what we have learned from others. But this is not enough. The fact that we have never seen a man in a wall throwing a flowerpot cannot bring us to the conclusion that it is impossible in reality. Most of us have never seen a mouse eating inside a cheesecake either, but we don't think that is impossible in reality. Also, probably no one has ever told any of us: "Look, I want to tell you something about the facts of life: a man throwing flowerpots inside a wall is totally impossible in reality." So how could we still decide that it is so? The answer is generalization and classification. From all the things we have experienced and all the things we were told about reality, we can make a generalization and classify this situation as something that cannot happen in reality.
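
Judging a never-before-seen scene in this spirit can be sketched with the same nearest-example trick used above for handwriting: reduce each scene to crude features and classify a new one like its closest labeled example. The features and examples below are entirely invented for illustration; real knowledge of the world is vastly richer.

```python
import math

# (both things are physical objects, the location has empty space) -> possible?
examples = {
    (1.0, 1.0): True,    # a man standing in a window
    (1.0, 0.0): False,   # a man inside solid rock
    (0.0, 1.0): False,   # a number sitting in a room
}

def possible(scene):
    """Classify a scene like its nearest known example."""
    nearest = min(examples, key=lambda known: math.dist(known, scene))
    return examples[nearest]

# Never seen before: a mouse inside a cheesecake vs. a man inside a wall.
print(possible((1.0, 0.8)))  # -> True  (a cheesecake has room for a mouse)
print(possible((1.0, 0.1)))  # -> False (walls are solid)
```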

So generalization and classification are essential for understanding the meaning of things. We know that it is difficult for computers to make such generalizations and classifications. Maybe today's computers, which can hardly read handwriting, are not powerful enough to make the necessary logical calculations and reach such conclusions within a reasonable time, but it is by no means something that machines will never be able to do.

 

Then why is AI stuck?

If artificial intelligence is possible and there is no technological threshold that must be crossed first, then why has the progress in artificial intelligence not been even nearly proportional to the development of computer technology over the last half century?

The answer is that we are just trying to go in the wrong direction.

All the computers and programs built up to now have been designed to do exactly what we planned for them to do. Every single response of the computer is fixed by the program. Computers just follow the orders of their programmers. No intelligence is needed to follow orders. All the computer needs is the ability to relate a response to a set of inputs and act accordingly.

In our attempts to create artificial intelligence we use the same approach: we try to define for the computer how an intelligent creature behaves and tell it to act this way. There is a great problem with this approach. We don't really know what exactly intelligence is, and we certainly cannot decompose it and specify it as a set of detailed instructions. I doubt that we will ever be able to do so without the aid of intelligent computers. We are trapped in a catch-22 here, and there is no chance that we will achieve truly intelligent computers this way, at least not in the foreseeable future.

But is there an alternative way? I think there is.

A baby is not born intelligent. A newborn baby will not pass a Turing test or any known IQ test. But a baby is born with the tools needed to develop intelligence. It has memory, it has the ability to learn languages, it has some basic logical capabilities, and it has generalization and classification capabilities. At first it has a very limited set of rules and responses: if something makes it feel bad, it cries; if something makes it feel good, it sleeps. With time the baby collects memories and starts classifying them. New experiences are classified using generalizations. The responses are refined. It learns about reality by experience. It learns to speak and to understand what others say. It develops intelligence.
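
A minimal sketch of this idea in code: an agent that starts with two innate responses and no knowledge of when to use them, and refines its choices purely from feedback. Everything here (the situations, the responses, the rewards) is invented for illustration; a real learning system would be enormously more elaborate.

```python
import random

situations = ["hungry", "tired"]
responses = ["cry", "sleep"]

def reward(situation, response):
    """The environment's feedback; the "right" reaction is unknown to the agent."""
    correct = {"hungry": "cry", "tired": "sleep"}
    return 1.0 if correct[situation] == response else -1.0

# Learned value of each (situation, response) pair, initially neutral.
value = {(s, r): 0.0 for s in situations for r in responses}

for _ in range(200):
    s = random.choice(situations)
    if random.random() < 0.2:                      # sometimes explore...
        r = random.choice(responses)
    else:                                          # ...mostly exploit
        r = max(responses, key=lambda x: value[(s, x)])
    # Nudge the stored value toward the observed reward.
    value[(s, r)] += 0.1 * (reward(s, r) - value[(s, r)])

# After some experience the agent almost always reacts appropriately.
print(max(responses, key=lambda x: value[("hungry", x)]))  # -> "cry"
```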

If we want to create artificial intelligence, we need a similar approach. We must create a computer with an initial program that has a limited set of rules and capabilities but that allows it to learn, to enlarge its set of rules, and to develop intelligence by itself. This might be very scary. We must give up control over the developing intelligence, and we cannot know what will come of it. What if it gets smarter than us and dislikes us? We must protect ourselves and put very strict limitations on what these computers can do.

Things get even worse when we realize that much of our knowledge about reality comes from physically experiencing it. The brain, in addition to being our information processing unit, is also the control unit of our body. A child learns about the physical rules of reality by playing with toys, moving things, and throwing them. Giving computers that are in the process of developing an unknown intelligence control over physical devices would be a real nightmare for anyone who has ever seen one of those science fiction movies. We must find other ways to let computers learn about reality.

 

Conclusion

We have seen (at least those of us who believe in materialism) that, in principle, nothing prevents the existence of artificial intelligence. There is also no technological threshold; artificial intelligence can develop gradually, in step with our technology. Yet there is a psychological barrier: the fear of the consequences of an intelligence that is not fully under our control. Humanity may break through this barrier some day; otherwise it must give up the dream of artificial intelligence for a very long time.