SUPERINTELLIGENCE: ARE WE DOOMED?
"The development of full artificial intelligence (AI) could
spell the end of the human race," warns Stephen Hawking. Elon Musk fears
that the development of AI may be the biggest existential threat humanity
faces. Bill Gates urges people to beware of it.
AI, the field of study concerned with producing intelligent
behavior in non-human agents, i.e., in machines and software, has seen explosive
growth over the past several decades. The ever-increasing power of computers
and rigorous scientific work by researchers have brought AI within the
reach of ordinary people. Strangely, it is at this dawn of AI that we hear such dire
predictions about our future.
Dread that the abominations people create will become their
masters, or their executioners, is hardly new. But voiced by a renowned
cosmologist, a Silicon Valley entrepreneur and the founder of Microsoft—hardly
Luddites—and set against the vast investment in AI by big firms like Google and
Facebook, such fears have taken on new weight. With supercomputers in every
pocket and robots looking down on every battlefield, just dismissing the
premonitions as science fiction seems like self-deception. The question is how
to worry wisely.
Today’s AI produces the semblance of intelligence through brute
number-crunching force, without great interest in approximating how minds equip
humans with autonomy, interests and desires. Computers do not yet have anything
approaching the wide, fluid ability to infer, judge and decide that is
associated with intelligence in the conventional human sense.
Yet AI is powerful enough to make a dramatic difference to human
life. It can already enhance human endeavor by complementing what people can
do. For example, a recent study has shown that AI-based simulation
modeling that understands and predicts the outcomes of treatment could reduce
health-care costs by over 50 percent while improving patient outcomes by nearly
50 percent.
IBM’s Watson supercomputer evaluates evidence-based cancer
treatment options, using analytics to help a physician consider all related
texts, reference materials, prior cases, and the latest knowledge in journals and
the medical literature when treating an illness. The analysis could help physicians
determine the best options for diagnosis and treatment in a matter of seconds.
Because of the 'humanity gap' between an artificial mind and a real
human mind, a human working in conjunction with an AI machine will always be
more powerful than AI working on its own. That holds, however, only unless Stephen Hawking’s
premonition comes true: “It (AI) would take off on its own, and re-design
itself at an ever-increasing rate.”
Many philosophers, futurists, and AI researchers have
conjectured that human-level AI will be developed in the next 20 to 200 years.
If these predictions are correct, they raise new and sinister questions about
our future in the age of intelligent machines. The power of the atom lay dormant
throughout history until humans unleashed it in 1945. Could the current century
see an “intelligence explosion?”
In “Superintelligence: Paths, Dangers, Strategies” (Oxford
University Press, July 2014), Nick Bostrom, a futurist at the University of
Oxford, argues that if machine brains surpass human brains in general
intelligence, then this new superintelligence could replace humans as the
dominant life form on Earth. If that happens, it would be taking a page from
the book of human history: we became the dominant species not through strength
but through intelligence.
We are already seeing Google’s DeepMind and Microsoft’s Project
Adam claim to better human performance at image-recognition tasks. Unlike the
human brain, AI machines have no physical limitation. They can be faster
– the University of Illinois has recently reported a 604 GHz transistor,
while neurons fire at roughly 200 Hz – and vaster – a computer can be
warehouse-sized, but a human brain must fit inside the cranium.
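To put those figures in perspective, here is a back-of-the-envelope ratio in Python; the numbers come straight from the comparison above, and raw switching speed is, of course, not the same thing as intelligence.

    # Illustrative arithmetic only: raw switching speed is not intelligence.
    device_hz = 604e9   # ~604 GHz device speed cited above
    neuron_hz = 200.0   # typical peak firing rate of a neuron
    print(f"raw speed ratio: {device_hz / neuron_hz:.1e}x")  # ~3.0e9x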
The critical threshold beyond which an “intelligence explosion,” or
singularity, happens is still a theoretical conjecture and has yet to be
confirmed experimentally. There are great uncertainties about technological
progress, particularly the hardware and software bottlenecks on the path to
“full” AI capabilities. However, it is undeniable that the future after the
creation of smarter-than-human intelligence would be profoundly unpredictable.
While specific predictions regarding the consequences of superintelligent
AI vary from economic hardship to the complete extinction
of humankind, many futurists, like Nick Bostrom, agree that the issue is of
utmost importance and needs to be seriously addressed.
The hysteria around superintelligent AI, or artificial superintelligence,
may be overblown, according to most serious AI researchers. Those
who have worked with AI technologies can appreciate the enormity of the task
of ushering in a superintelligence revolution. As a practitioner of general (as
opposed to friendly) AI, I have yet to see the first glimmers of monstrous
sentience gestating in today's code. Perhaps Gates, Hawking and Musk, by virtue
of being incredibly smart guys, know something that I don't.
We certainly don’t know how neural processing speed or brain
size relates to intelligence, even in a qualitative sense. Mammals
like elephants and whales have much bigger brains than humans, yet their
intelligence appears limited compared with ours. We do not know the
relation between brain size and intelligence across animals, because we have no
useful measure, or even definition, of intelligence across animals. And these
quantities certainly do not seem to be particularly related to differences in
intelligence between people.
Bostrom claims that once we have a machine with human-level
intelligence, superintelligence will be achieved just by making the machine faster
and bigger. However, all running faster does is save time. If there are two
machines, A and B, and B runs ten times as fast as A, then A can do anything that
B can do if one is willing to wait ten times as long. Similarly, his claim that
a large gain in intelligence would necessarily entail a correspondingly large
increase in power seems far-fetched.
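The trade-off is easy to make concrete. In the sketch below, the task size and machine speeds are hypothetical numbers chosen only to illustrate the point: the faster machine finishes sooner, but the set of tasks it can complete is identical.

    # Hypothetical numbers: speed changes wall-clock time, not capability.
    operations = 1e12            # size of some fixed task
    rate_a = 1e9                 # operations per second on machine A
    rate_b = 10 * rate_a         # machine B runs ten times as fast
    print(operations / rate_a, "seconds on A")   # 1000.0 seconds on A
    print(operations / rate_b, "seconds on B")   # 100.0 seconds on B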
Some AI researchers argue that seemingly superintelligent
systems may have limited autonomy. IBM’s Deep Blue could beat the world chess
champion, but it did not share the same breadth of intelligence as humans. Soon,
airplane autopilots and self-driving systems for cars will be more reliable
than human pilots and drivers. Does that mean they are more intelligent than
people? In a very narrow way, these systems are “more intelligent” than people,
but their expertise applies to a very narrow domain, and they have very little
autonomy. They cannot really go beyond the task they were designed to perform.
The difference between panic and caution is a matter of degree.
Random, unsupported comments – yes, even from Bill Gates – can do more harm
than good to the public psyche when the intention is merely to urge caution. Certainly a general artificial intelligence
is potentially dangerous, and once we get anywhere close to it, we should use
common sense to make sure that it doesn’t get out of hand.
Programs with potentially far-reaching capabilities,
such as those controlling power grids or nuclear weapons, should be
conventionally designed, with behavior that is very well understood. They should be
protected from subversion by AIs; but they have to be
protected from human sabotage anyway, and the issues of protection are not very
different. A machine should have an accessible “off” switch; and in the case of
a computer or robot that might have any tendency toward self-preservation, the
off switch should be one it cannot block.
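A minimal sketch of that last principle, assuming a hypothetical worker program and an external trigger file (an illustration of the idea, not a real safety mechanism): the off switch lives in a separate supervisor process that the controlled program never touches.

    # Minimal sketch: the "off" switch is held by a supervisor the worker
    # cannot block. The flag path and worker command are hypothetical.
    import os
    import subprocess
    import time

    KILL_FLAG = "/tmp/off_switch"            # external trigger the operator creates
    WORKER_CMD = ["python3", "worker.py"]    # hypothetical controlled program

    worker = subprocess.Popen(WORKER_CMD)
    while worker.poll() is None:             # worker still running
        if os.path.exists(KILL_FLAG):        # operator flips the switch
            worker.terminate()               # ask the worker to stop...
            try:
                worker.wait(timeout=5)
            except subprocess.TimeoutExpired:
                worker.kill()                # ...and force it if it resists
            break
        time.sleep(1)

The essential design choice is that the supervisor, not the worker, owns the switch; nothing the worker does in software can revoke it short of subverting the supervisor itself.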
Even so, one might reasonably argue that the dangers involved
are so great that we should not risk building a computer with anything close to
human intelligence. Something can always go wrong, or some foolish or malicious
person might create a superintelligence with no moral sense and with control of
its own off switch. I certainly have no objection to imposing restrictions that
would halt AI research far short of human intelligence.
It is certainly worth discussing what should be done in that
direction. However, Bostrom’s claim – that we have to accept that
quasi-omnipotent superintelligences are part of our future, and that our task
is to find a way to make sure that they guide themselves by moral principles
beyond the understanding of our puny intellects – does not seem to me a helpful
contribution to that discussion.
[SUBHODEV DAS]