SUPERINTELLIGENCE, Nick Bostrom

Are You Sure You Are Real?

Gerald Alper

Imagine that intelligent extraterrestrial life is finally discovered.

Imagine that their civilization, likely to be far older than ours, is millions of times more advanced than our own because of Moore's law, the observation that computing power doubles roughly every two years.

Imagine a super-programmer who, infatuated with his almost godlike computing power, decides to simulate an earth-like planet such as ours.

Imagine he succeeds beyond his wildest dreams: simulating the rise of life from its microbial origins, through billions of years of multicellular development, to the emergence of Homo sapiens.

Imagine he does not stop there: incredibly, he manages to simulate in exquisite detail the vicissitudes of human subjectivity. He simulates subjectivity (your feelings, thoughts, and sensations) so faithfully that no one, not even the person being simulated, can tell.

Before you rush to judgment (as I originally did) and conclude that Nick Bostrom is an uncredentialed, showboating crank trying to lead us into the land of The Matrix and The Twilight Zone, consider this:

Nick Bostrom is a hugely respected interdisciplinary Swedish philosopher who teaches at Oxford. He is the author of some 200 publications. His New York Times best-selling Superintelligence helped spark a global conversation about artificial intelligence.

So What Is He Telling Us?

Bostrom knows it is a big ask, but he is up for the task. He reminds us that cosmological questions can only be addressed from the standpoint of astronomical time scales, and that, given the mind-blowing exponential growth of computing power (doubling every two years), it makes little sense to apply the cognitive and biological constraints of today's science to the superpowers of a superintelligent advanced civilization.
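To get a feel for the arithmetic behind that claim, here is a back-of-the-envelope sketch (my illustration, not anything from Bostrom's book): a quantity that doubles every two years grows by a factor of two raised to half the number of years elapsed. The function name and numbers below are purely illustrative.

```python
# Rough illustration of Moore's-law compounding (assumes a clean doubling
# every two years, which real hardware only approximates).

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Factor by which computing power grows after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

print(round(growth_factor(40)))      # 1048576: roughly a million-fold after 40 years
print(f"{growth_factor(100):.2e}")   # about 1.13e+15 after a single century
```

On this naive reading, a civilization that has sustained such growth for even a few extra decades would already dwarf us computationally, which is the intuition Bostrom leans on.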

Remember, Bostrom is saying something far more radical than speculating about the future of the earth. He is speculating about the future of the cosmos. He is speculating in particular about what we on earth could reasonably expect from an advanced superintelligent civilization that has millions of times the computing power that we do.

Nick Bostrom is a man of good will, a bona fide humanist. He is the founder and director of the Future of Humanity Institute. He officially believes in hope, but the picture he draws is not a pretty one. Logic, brutal logic, compels him to point out that, given the infinite or near-infinite size of our universe, there are bound to be many worlds and many levels of differing cognition. In such a pluralist universe it is exceedingly unlikely that the summum bonum of computational intelligence has miraculously been attained by Homo sapiens. That would be like claiming that Mt. Everest is the highest mountain in the entire cosmos. Bostrom is fearless in spelling out the consequences of this brute cosmological fact: if it is true that there are superintelligent advanced civilizations that dwarf us in computational power, it is also true that we are thereby in danger. The danger is that there is no greater power (according to Bostrom) than computational power. If a superintelligent civilization has such a technological advantage, it is only a matter of time before it uses it (simply because it can).

The dystopia Bostrom fears is not Orwellian. It is not political. It is not even human. It is existential. It is existential in the sense that it is a threat to the very core of our being. Consider, says Bostrom, how far we have already come. We can now simulate a human voice, face, body, and mind. We can build driverless cars, chess machines that can humble a world champion, and diagnostic machines that can outperform a world-class doctor. Bostrom points to an experience machine that, borrowing the latest techniques from the burgeoning field of virtual reality, can already blur the line between imagination and reality. He asks us, in light of Moore's law, to envision a hypothetical advanced civilization that is run, guided, or assisted by superintelligent machines. Is there any limit to what it could do?

He does not hedge his bets. Not only will we be incapable of determining who is human and who is not, we will be incapable of answering the foundational existential question: Who am I? The power to simulate will be so overwhelming that there will be no simple way, no existential fingerprint, to address the question of all questions. So sure is Bostrom of this stepwise cosmological logic that he is staking the lion's share of his prodigious interdisciplinary academic prestige on the fantastical, Alice-in-Wonderland claim that we are possibly, likely, or already simulations; if not now, then soon.

The View From Behind the Couch

The Low-Hanging Fruit

On a far deeper level than the question of how far a superintelligent advanced civilization would choose to flex its godlike technological muscles lies the problem of theodicy (that is, why does an omnipotent, all-loving, merciful being allow evil, senseless suffering, satanic cruelty, and unbearable pain to exist?). Of note is that the classic theological bailout, "God's ways are mysterious," is not available. Like most AI programmers, Nick Bostrom is a naturalist, not a theologian. He may not know what the ultimate laws of physics are or will be, but he is sure they are physical. Science is the truest path to knowledge. He does not believe in the supernatural, in theology, or in intelligent design. He does believe in the godlike power of an advanced superintelligent civilization. He thinks (because of Moore's law) that there can be, or already are, technologically enhanced civilizations millions of times more intelligent than we are. He believes that for the first time in history we can, in theory, ask the question: at what point does it become morally imperative to intervene?

So here is a thought experiment. You are, say, a Jewish programmer (you do not have to be Jewish in this particular thought experiment, but it helps) in a superintelligent advanced civilization. You are running the simulation for the first time, and you are in the midst of the meteoric rise to power of the young Hitler. You see instantly where this is going. Although superintelligent, you have the ordinary sensibility and social empathy that typically accompany being highly civilized. In other words, you are not the kind of moral monster you would have to be to willingly continue the simulation. You are not a psychopath like Ed Harris's character in The Truman Show, who desperately wants his televised creation (Jim Carrey) to remain forever imprisoned in the computer-driven, globally televised world he has diabolically programmed.

YOU INTERVENE

You do not stop there. You have been radicalized. You have realized — to your horror — that the humans you are now simulating are feeling exactly what you would feel in these circumstances. You are changed. The next time a woman is about to be raped, a child is being molested, someone trapped in a burning building is screaming in pain…

YOU INTERVENE

You stop the simulation. You are sickened by what you are seeing. You are not Dr. Frankenstein. You do have godlike technological power, but you are not God. You recognize that it is immoral, without the commensurate goodness and wisdom expected of a divinity, to subject a simulated human being (no less than a non-simulated human being) to such evil, senseless suffering, and unbearable pain.

This does not mean that an advanced superintelligent civilization, the kind that Nick Bostrom so expertly describes, cannot be seriously immoral; it can. It just seems unlikely. And that unlikelihood highlights the improbable, sensationalistic, science-fiction-like nature of Bostrom's futuristic thesis.

Like most computer scientists and AI theorists, Bostrom believes the essence of human intelligence is information processing. The computer, it follows, is the best model for the human mind. The more power a computer has, the more intelligent it is. Double the power of a computer and you double its intelligence (Moore's law).

Forgotten is that Moore's law applies to machines, not to people. A human being's intelligence does not double every two years. A human being's knowledge and wisdom do not double every two years. On the contrary, sooner or later physical, biological time (courtesy of the second law of thermodynamics, entropy) decomposes the body.

Forgotten also is the profound difference between a mind and a machine. No one has ever succeeded in building a mind. No machine has ever self-assembled. What has over a century of psychoanalytic, depth-psychological exploration revealed about the human mind and psyche? Perhaps the central theorem, recently articulated by the psychoanalyst Christopher Bollas (widely considered a creative genius and the author of the revolutionary book Meaning and Melancholia in the Age of Bewilderment), is simply that "the mind is nothing like the brain."

Bollas is asserting what every psychodynamically oriented psychotherapist (myself included) sees over and over again: that the human mind is not a machine; that a linchpin of human intelligence is meaning-making, the ability to understand as well as to process information; to think in terms of patterns rather than data sets; to discriminate rather than merely compute; to be goal-oriented; to think long term. Human thinking is not binary; it is multi-dimensional. It is layered. It is both rational and irrational, logical and illogical, often in conflict.

As William James pointed out long ago, emotion is omnipresent in the human mind. There is no such thing as unmotivated behavior. There is always affect, interest, bias, habit, and instinct. The mind is driven by Darwinian biology. It is (following Hughlings Jackson) hierarchical and developmental. It is susceptible to mental illness (following Freud), to regression (following Janet), and to splitting (following Charcot).

By contrast, the human mind as revealed in a clinical setting is not an artifact made in a lab by a team of engineers (e.g., Deep Blue); it is a biologically driven, emergent product of evolution by natural selection.

The project of the psychotherapist, ironically, is the antithesis of that of the AI researcher: to forge an alliance built on trust in which an authentic self (sometimes called the true self) can emerge. How different that is from the AI programmer who searches for artificial linguistic strategies that can deceive an unknowing interlocutor into believing that a mindless machine can think like a human being (i.e., the Turing test).

Once, however, intelligence is viewed from the more rounded perspective of a person, and not of a data-processing, expert-task-performing computer, an incredibly diversified, richer picture emerges. Almost immediately, the chasm between machine and person widens exponentially. Foremost, as mentioned, is that the computer does not understand anything it encounters. It senses but does not perceive. It does not feel anything. It has no emotions, no history, no parents, no friends, no personal memories. No language. It cannot speak. It cannot hear. It does not read. It has never had a job. It has no goals, no ambitions, no concept of death. It is not aware, and does not need to be aware, that it is not alive. It does not know what it means to be alive. When it crashes, as it must, it does not experience extinction. It does not experience anything, because it cannot process experience. It has no culture. It does not know what the word culture means. It has memory, in the information-retrieval sense, but it has no history. It has no understanding of what it means to refer to a "lived life."

An artificially intelligent computer is not afraid of anything. It does not care about, and cannot differentiate between, being on and off. It has no preferences. It is will-less. It does what it is programmed to do. By itself, it has no desire to be noticed. It does not want anything. It has no values. Nothing means anything to it. It is devoid of empathy. It has no identity, no conscience. No guilt. No doubt. No morality. No inhibitions. No sense of the comedy of life. It is hard to imagine existentialism, philosophy, or religion within its reach.

Not surprisingly, computer scientists tend, as a discipline, to be engineers. When Deep Blue defeated world chess champion Garry Kasparov in 1997, no one celebrated more heartily than IBM's engineers. Immediately, the claim was put forth that a machine had not only demonstrated human intelligence but surpassed it. Predictions circulated that it was only a matter of time before superintelligent machines ruled the world.

So it might be wise to note that Deep Blue, when recently asked which was bigger, a shoe or Mt. Everest, could not answer; and that to date, after almost eighty years of concentrated worldwide research, the project to create a fully intelligent (in the human sense) machine has been a spectacular failure. As Roger Penrose has said, a computer may be able to crunch numbers and manipulate symbols amazingly fast, but "it doesn't understand and never will understand (because it is a machine) what it is doing."

I think that is as true today as it was 80 years ago.

More damning: not once in the history of computer science has a so-called artificially intelligent machine approached the complexity, depth, and richness of the human mind.

From the perspective of human subjectivity, Nick Bostrom's daring hypothesis that human beings are destined to live, and may already be living, in a Matrix-like simulation is more science fiction than scientific speculation.

Gerald Alper

Author

The Portrait of the Artist as a Young Patient

Psychodynamic Studies of the Creative Personality

His new book is

God and Therapy

What We Believe When No One is Watching

Author. Psychotherapist. Writing about psychology for all to read. I also interview scientists.