Will computers ever take over the world? Human beings
are still superior to computers. A computer may be a fast and accurate
calculating machine, but human beings are capable of taking complex
decisions quickly, and learning from past experiences.
Take a game of chess. At a stage somewhere in the
middle of the game, there may be scores of possible moves. And to be
a successful player, one has to think several moves in advance. Do you really
think that chess grandmasters go through all the moves, and all future
positions in their minds? Then again, how does a child acquire language?
Learning and past experience are used to their fullest extent in skill
acquisition.
A chess-playing computer, on the other hand, might
have to go through all possible moves, attaching points to each piece
and finding the best move numerically. Of course it will do this very
fast, and in addition it may have a complicated program that works out
in advance which moves are worth examining. It may even "learn"
from the games of grandmasters. But the learning we are
talking about is learning from scratch. It would be a very wise computer
that could learn the rules of chess just from watching a few games. A child
learns language in much the same way, with only a few corrections from its parents.
Recognising the superiority of the human brain, researchers
have examined it in detail, and are still finding out more about it. The
brain consists of thousands of millions of neurons, heavily interconnected
with one another. Each neuron consists of a cell body, from which emerges
a single axon. At the end of the axon are a multitude of branches that
just about touch other neurons. The cell body looks hairy under a microscope
because of hundreds of dendrites, the fibres that are connected to the
branches of another neuron's axon. It is sheer numbers - millions upon
millions of neurons, each connected to hundreds of neighbours - that make human
brains (and other animal brains) the complicated machines that they
are. A nervous signal received at the cell body is transmitted down
the axon, and to all the other cells making connections with it. This
transmission doesn't happen at random, but works on the "all or
none" principle. That is, if the excitatory signals coming
through the dendrites add up to a certain threshold value or more, the
axon fires. Now each connection at the dendrite, called a synapse, has
a weightage. Weightages may even be negative, or inhibitory. So if a
certain neuron fires at a threshold signal of 1, and the weightages
of the incoming signals are 0.2, 0.3, 0.6 and -0.1, the axon will fire.
Summation of signals may also be over a period of time: small impulses
coming repeatedly will also cause an axon to fire.
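A minimal sketch of this threshold model in Python (the function name and the code are my own illustration, not a description of real neurons; the numbers are the ones from the example above):

    def neuron_fires(weighted_inputs, threshold=1.0):
        # Sum the weighted signals arriving through the synapses; the
        # axon fires only if the total reaches or exceeds the threshold
        # (the "all or none" principle).
        return sum(weighted_inputs) >= threshold

    # The example from the text: weightages of 0.2, 0.3, 0.6 and -0.1
    # (the last one inhibitory) sum to exactly 1.0, so the axon fires.
    print(neuron_fires([0.2, 0.3, 0.6, -0.1]))  # True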
So what is interesting about these facts? It is that
these weightages are not fixed at birth, but change throughout life
by a process known as facilitation. Anything new that is learned, or
any new response to old stimuli, is the result of dynamic changes at
the synapses. In fact, in infants, the learning process results in the
creation of new synaptic connections. Therefore, it is important that
children are exposed to colours, noises, tastes, smells and interesting
textures to touch. This is what causes the brain to "grow".
An early experiment on kittens kept in the dark for the first few weeks
of their life showed that they turned out blind, because neural connections
were simply not formed in the absence of stimuli.
After knowing all this, the next question is - can
we duplicate this? No, you may say. But that is similar to the disbelief
that the German chemist Friedrich Wöhler faced when he synthesised urea
in his laboratory in 1828. Before Wöhler, it was thought that all "organic"
chemicals were produced by a life force, and were therefore special.
Today, we think nothing of the millions of organic chemicals that we
use in daily life: our enzyme detergents, our synthetic fabrics, our
plastic buckets. If we can make a "brain", the possibilities
are enormous: everything that conventional computers find difficult
to do can be attempted. For example, speech recognition, signature verification,
speech synthesis, teaching - human things.
The answer is that it has been done. As early as 1958,
Frank Rosenblatt was working on the Perceptron. This machine actually
learned from experience, using Hebb's rule, reinforcing connections.
At Bell Labs, Larry Jackel and his team put together 75,000 transistors
and 54 simple processors, connected by resistors, creating 14,000 artificial
neurons with light-sensitive amorphous silicon. A picture projected
repeatedly onto this screen was learnt, and the circuits could then reconstruct
the whole image when shown only a part of it.
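To make Hebb's rule concrete, here is a rough sketch in Python of how such reinforcement might work (the function name, learning rate and pattern are my own illustrative choices, not the Perceptron's or the Bell Labs chip's actual workings):

    def hebbian_update(weights, inputs, output, rate=0.1):
        # Hebb's rule: a connection is reinforced whenever the input
        # coming through it and the neuron's output are active together.
        return [w + rate * x * output for w, x in zip(weights, inputs)]

    # Showing the same pattern repeatedly strengthens the connections
    # that carry it - the sense in which the machine "learns".
    weights = [0.0, 0.0, 0.0]
    pattern = [1, 0, 1]
    for _ in range(5):
        output = 1 if sum(w * x for w, x in zip(weights, pattern)) >= 0 else 0
        weights = hebbian_update(weights, pattern, output)
    print(weights)  # the weights on the active inputs have grown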
There are more amazing neural networks still. The
WISARD at Brunel University can analyse TV images of human faces. It
can really tell you whether they are smiling or frowning - get any supercomputer
to do that! Most neural nets have to be taught, by a teacher who knows
all the answers. This is known as supervised learning, as opposed to
unsupervised learning, where the net just "picks up" things
on its own. Supervised learning may be reinforcement learning, the way
a child is taught: the neural net is told only how well it performed,
so that the input weights can be adjusted. Fully supervised learning
is when the teacher takes the trouble to inform the neural net what
the correct response would have been. ALVINN (Autonomous Land Vehicle
Neural Network) drives a NAVLAB vehicle through the Carnegie Mellon
University campus. It has to be taught first, with 1200 simulated images,
shown 40 times each; it takes 30 minutes to learn. After that, it can
manoeuvre the vehicle at 3.5 miles per hour - which is twice as fast
as a conventional computer could do it.
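As a rough illustration of fully supervised learning, here is a simple error-correction rule in Python (the names and numbers are invented for the sketch; this is not ALVINN's actual training code):

    def supervised_update(weights, inputs, target, rate=0.1):
        # The teacher supplies the correct response (the target); each
        # weight is nudged in proportion to the error and to the input
        # that came through it.
        output = sum(w * x for w, x in zip(weights, inputs))
        error = target - output
        return [w + rate * error * x for w, x in zip(weights, inputs)]

    # Teach a single neuron that this input should produce 1.0.
    weights = [0.0, 0.0]
    for _ in range(20):
        weights = supervised_update(weights, [1.0, 0.5], 1.0)
    print(weights)  # the output is now close to the target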
Well, are we close to an electronic brain? What would
you say? Your guess is as good as mine. Present neural networks are
slow - only half as fast as a housefly, which doesn't even have a brain,
only knots of neurons known as ganglia. It will take a lot of evolution
to reach the capacity of the human brain, and our generation will await
it with mixed feelings.
© 1994-2012, Sualeh Fatehi. All rights reserved.
This article was written in 1994, and published in Express Computer,
India's leading national computer weekly, in October 1997.