Virtually Human: The Promise-and the Peril-of Digital Immortality

Virtually Human explores what the not-too-distant future will look like when cyberconsciousness—simulation of the human brain via software and computer technology—becomes part of our daily lives. Meet Bina48, the world's most sentient robot, commissioned by Martine Rothblatt and created by Hanson Robotics. Bina48 is a nascent Mindclone of Martine's wife that can engage in conversation, answer questions, and even have spontaneous thoughts that are derived from multimedia data in a Mindfile created by the real Bina. If you're active on Twitter or Facebook, share photos through Instagram, or blog regularly, you're already on your way to creating a Mindfile—a digital database of your thoughts, memories, feelings, and opinions that is essentially a back-up copy of your mind. Soon, this Mindfile can be made conscious with special software—Mindware—that mimics the way human brains organize information, create emotions, and achieve self-awareness. This may sound like science-fiction A.I. (artificial intelligence), but the nascent technology already exists. Thousands of software engineers across the globe are working to create cyberconsciousness based on human consciousness, and the Obama administration recently announced plans to invest in a decade-long Brain Activity Map project. Virtually Human is the only book to examine the ethical issues relating to cyberconsciousness, and Rothblatt, with a Ph.D. in medical ethics, is uniquely qualified to lead the dialogue.

Product Details

ISBN-13: 9781466847040
Publisher: St. Martin's Press
Publication date: 09/09/2014
Sold by: Macmillan
Format: NOOK Book
Pages: 368
Sales rank: 511,254
File size: 2 MB

About the Author

MARTINE ROTHBLATT, Ph.D., MBA, J.D. is a lawyer, entrepreneur, and medical ethicist. In 1990 she founded and served as Chairman and CEO of Sirius Satellite Radio (now Sirius XM). When her daughter was diagnosed with a rare disease, Martine left Sirius to search for a cure. She founded United Therapeutics in 1996 and has since served as Chairman and CEO. Martine is also a leading legal advocate for human rights and has led the IBA in presenting the UN with a draft treaty on the genome.

RALPH STEADMAN was born in 1936. He began his career as a cartoonist, and through the years has diversified into many creative fields. Ralph collaborated with Dr Hunter S. Thompson in the birth of 'gonzo' journalism, with Fear and Loathing in Las Vegas; he has illustrated classics such as Alice in Wonderland, Treasure Island and Animal Farm, and written and illustrated his own books, which include Sigmund Freud, I Leonardo and The Big I Am. Steadman is also a printmaker, and has travelled the world's vineyards, culminating in his books The Grapes of Ralph, Untrodden Grapes and Still Life with Bottle. Steadman's recent books for Bloomsbury include his epic collection of bird illustrations, Extinct Boids.

Read an Excerpt

Virtually Human

The Promise---and the Peril---of Digital Immortality

By Martine Rothblatt, Ray Kurzweil, Ralph Steadman


Copyright © 2015 Martine Rothblatt
All rights reserved.
ISBN: 978-1-250-04691-8



The machine does not isolate man from the great problems of nature but plunges him more deeply into them.


The great innovators in the history of science had always been aware of the transparency of phenomena toward a different order of reality, of the ubiquitous presence of the ghost in the machine—even such a simple machine as a magnetic compass or a Leyden jar.


Recently I exchanged family photographs with a friend through email. Looking at the multiple generations represented in snapshots always tugs at my heart. Like any grandparent, I wonder about how my children’s and grandchildren’s lives will blossom and expand; I worry about the challenges they will face and how I might support them in getting over life’s humps. However, unlike grandparents of the past, I’m confident that my potential to stay connected to my family and subsequent generations of relatives will be available and nearly limitless.

Digital consciousness is about life and the living, because, as you will learn, digital consciousness is our consciousness. We cannot ignore the fact that thanks to strides in software and digital technology and the development of ever more sophisticated forms of artificial intelligence, you and I will be able to have an ongoing relationship with our families: exchange memories with them, talk about their hopes and dreams, and share in the delights of holidays, vacations, changing seasons, and everything else that goes with family life—both the good and the bad—long after our flesh and bones have turned to dust.

This blessing of emotional and intellectual continuity or immortality is being made possible through the development of digital clones, or mindclones: software versions of our minds, software-based alter egos, doppelgängers, mental twins. Mindclones are mindfiles used and updated by mindware that has been set to be a functionally equivalent replica of one’s mind. A mindclone is created from the thoughts, recollections, feelings, beliefs, attitudes, preferences, and values you have put into it. Mindclones will experience reality from the standpoint of whatever machine their mindware is run on. When the body of a person with a mindclone dies, the mindclone will not feel that they have personally died, although the body will be missed in the same ways amputees miss their limbs but acclimate when given an artificial replacement. In fact, the comparison suggests an apt metaphor: The mindclone is to the consciousness and spirit as the prosthetic is to an arm that has lost its hand.

Never mind about human cloning through genetic reproductive technology that supposedly creates a new “baby us” in a Petri dish, without the benefit of old-fashioned procreation “techniques.” Digital cloning will be here much faster and with few if any of the regulatory hindrances that currently prevent human genetic cloning from moving faster than a snail’s pace. Remember Dolly, the sheep created from genetic material in 1996, and the questions she raised about artificial genetic replication and humans? After Dolly, bans on similar reproductive cloning of humans were enacted in more than fifty countries. Since that time, the U.S. government has restricted federal funding of such projects. In 2002, President George W. Bush’s Council on Bioethics unanimously opposed cloning for reproductive purposes but were divided on whether cloning could be used for research; nothing has changed as of this writing. The United Nations tried to pass a global ban on human cloning in 2005, but was unsuccessful because disagreements over whether therapeutic cloning should be included in the moratorium left the matter in a stalemate.

Aside from ethical and legal obstacles, successful genetic cloning via reproductive science is also exorbitantly expensive, and prone to colossal and possibly heart-wrenching failure. Furthermore, a genetic clone of a person is not the person, just a copy of the DNA of a person. Genetic cloning does not create any part of a person’s consciousness; identical twins, for example, do not have identical minds. Digital cloning of our own minds is an entirely different matter, albeit one accompanied by considerable legal and social considerations, which I discuss in depth in this book. It is also being developed in the free market, and on the fast track. It’s not surprising. There are great financial rewards available to the people who can make game avatars respond as curiously as people. Vast wealth awaits the programming teams that create personal digital assistants with the conscientiousness and obsequiousness of a utopian worker.

As uncomfortable as it makes some—a discomfort we have to deal with—the mass marketing of a relatively simple, accessible, and affordable means for Grandma, through her mindclone, to stick around for graduations that will happen several decades from now represents the real money. There is no doubt that once digital cloning technology is fully developed, widely available, and economically accessible to “average consumers,” mindclone creation will happen at the speed of our intentionality—as fast as we want it to.

Consciousness Is Key

It is in the mind that the poppy is red, that the apple is odorous, that the skylark sings.


Before we delve further into the world of mindclones, it’s essential that we come to an agreement on the definition of the thing that will make these beings our clones, and that is their ability to attain and demonstrate human consciousness. Determining a working definition of human consciousness is crucial on this journey. It is our consciousness that makes us us. The same qualities that constitute our consciousness—our memories, reasoning abilities, experiences, evolving opinions and perspectives, and emotional engagement with the world—will give rise to the digital consciousness of our mindclones, or what I will refer to as cyberconsciousness.

At birth and in early infancy there is no I and therefore no self.… The baby has instinctive urges but no sense that these urges belong to anyone.… Earliest experience, circumscribed by instinct and fear, takes on the human characteristics of I and me when an awareness of agency emerges from the fog of infant consciousness.… I have a self when I realize that I am me.… The self is comparable to painting a portrait of oneself painting a self-portrait.


The problem is, everyone—scientist and layman alike—has a slightly different concept of consciousness. Marvin Minsky, American cognitive scientist, author of The Emotion Machine, and cofounder of the Massachusetts Institute of Technology’s AI laboratory, calls “consciousness” a “suitcase word”2 in that it carries multiple legitimate meanings. Others in the field bemoan “the great variety of technical synonyms” for consciousness, and that this “perfusion of terms tends to hide underlying similarities.”3 Given the graduated fashion in which human brains have evolved and do evolve, it is likely that there are also gradations of consciousness. One common meaning of consciousness is self-awareness. But does it adequately describe the true nature of the condition?

Surely a baby’s self-awareness is different from an adolescent’s self-awareness, which is quite different from the self-awareness of a middle-aged person with their faculties intact and a quite elderly person who has lost some of their cognitive abilities. How “self-conscious” is a newborn versus an adult? I think of family photos—pictures of my parents when they were children or even of myself as a tiny boy—as evidence of loved ones who no longer exist and who, when they did, certainly had very different states of consciousness than the “final” or the most current version of the flesh-and-blood people the pictures represent.

While self-awareness is clearly an important facet of a conscious person, it’s not the only qualification. It certainly would not hold water as a definition of cyberconsciousness. In fact, a programmer can write a concise piece of self-aware software, one that examines, reports on, and even modifies itself.4 Software running a self-driving vehicle, for example, could be written to define objects in its real world including terrain (“navigate it using sensors”), programmers (“follow any orders coming in”), and the vehicle itself (“I am a robot vehicle that navigates terrain in response to programming orders”). A Google Car does these things right now, and few people would define the code it runs on, or the vehicle itself, as conscious.
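The narrow kind of "self-awareness" described here is easy to demonstrate in code. Below is a minimal, purely illustrative Python sketch (the class and field names are invented, not drawn from any real self-driving stack) of a program that examines, reports on, and modifies its own state:

```python
# Illustrative only: "self-aware" software in the weak sense above --
# a program that can inspect and alter its own parameters.
class SelfModel:
    def __init__(self):
        self.description = ("I am a robot vehicle that navigates terrain "
                            "in response to programming orders")
        self.speed_limit = 25  # mph; a mutable parameter of its own behavior

    def examine(self):
        # Report on its own current state.
        return {"description": self.description, "speed_limit": self.speed_limit}

    def modify(self, new_limit):
        # Change its own behavior in response to an order.
        self.speed_limit = new_limit

vehicle = SelfModel()
report = vehicle.examine()   # the program describing itself
vehicle.modify(15)           # the program altering itself
```

As the surrounding text argues, nothing here approaches consciousness; the program merely satisfies a checklist of self-examination, self-report, and self-modification.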

Self-aware software and robotic machines don’t feel physical or emotional pain or pleasure either—they are not sentient. Most people require mental subjectivity to include emotions, that is, sentience, in order to qualify as consciousness, because recognition of how we feel is integral to human consciousness—to the “human condition.” Yet sentience still doesn’t get us where we want to be in defining consciousness, because we expect conscious beings to be independent thinkers as well as feelers.

Hence, “feelings” is not a stand-alone description of consciousness either. Physical feelings don’t require complex cognitive capability. When a hooked fish squirms, many of us would interpret it as evidence that the creature is experiencing pain, while others may consider it an autonomic response, with no accompanying emotional reaction. Many of us would also not consider the fish conscious, because we don’t believe any part of its neurology is thinking about the pain, philosophizing about it, or complaining about it to others in its group. Instead, we think the fish is simply relying on noncognitive reflexes as it attempts to get out of a nasty situation. Once unhooked and back in its normal environment, the fish continues swimming as if it had never been hooked—and therefore it can easily be hooked again. Indeed, I know of a fisherman who catches and releases the same fish many times during the fishing season. The fish appears to feel the pain of getting hooked, but it never “learns” anything from the experience and applies no “lessons” to its future aquatic adventures, a display of how little self-consciousness it possesses.

Of course we humans would likewise reflexively protest being hooked, but we know we would feel the pain, swear about it, and think how to avoid it after the fact. We’d warn others against the pitfalls of the hook, passing on as much as we knew about it. Unlike a fish, we may not be so easily hooked the next time, because we internalize the original painful experience and try to avoid repeating it. We can use our brains to recognize a hook when we see one, and avoid it, as well as predict where the fisherman will stand next time he comes around, and move to another part of the lake. So clearly learning, reason, and judgment (the application of information) are also part of the consciousness equation. Autonomy, and an element of transcendence or what is thought of as our souls, is involved as well. It is in such recondite differences between fish and man that the definition of human consciousness resides.

In 1908 the deaf-and-blind pioneer Helen Keller poignantly and clearly described how human consciousness builds on communication when she said, “Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness.… Since I had no power of thought, I did not compare one mental state with another.”

In other words, while consciousness has an acceptable minimalist definition of being awake, alert, and aware—“Is he conscious?”—it also has a more salient meaning of thinking and feeling like anyone reading this book. To think like a human, then, one must also be able to make the kind of moral decisions, based on some variant of the Golden Rule, that philosophers and scientists alike, from Immanuel Kant to Carl Jung, believed are hardwired into human brains. Ask any healthy person anywhere in the world if it is wrong to hit a child over the head with a baseball bat and they will tell you it is.

Yet another complication in defining consciousness relates to our subconscious mind, which professionals refer to as our unconscious mind. There is overwhelming evidence that we are not self-aware of much of what we think and feel, and sometimes even act without thinking. As Yogi Berra summarized so brilliantly, “Think! How the hell are you gonna think and hit at the same time?”

Freud is famous for teaching that an unconscious mind, or Id, of which we are not fully self-aware is often at cross-purposes with a conscious mind, or ego, within which we autonomously reason. Modern psychology has largely distanced itself from Freudian interpretations of unconscious desires, but has accepted “the reality that the unconscious asserts its presence in every moment of our lives, when we are fully awake as well as when we are absorbed in the depths of a dream.”5 It is of course wrong to shoot someone dead over tweeting in a movie theater, but in 2014 a retired policeman did exactly that in Tampa, Florida, because his unconscious mind asserted its presence in a very bad way. President Barack Obama has described in speeches how white women reflexively grabbed their purses and moved away from him, before he was president; many of these reactions were likely unconscious responses to his skin tone.

Neither rationality, nor feelings, nor self-awareness need be present at all times for a person to be considered conscious like a human. Indeed, some level of non-reasoned, non-emotional, non-aware mental processing goes on pretty nearly at all times in the consciousness of everyone reading this book. To be humanly conscious necessarily implies an intermingled unconscious mind. As a human mind gets formed it inevitably shunts certain conceptions (generalizations and stereotypes), motivations (choose this), and decisions (avoid danger) to unconscious neural patterns, thereby providing more time and freeing up more brain power for conscious neural patterns. The same will occur with cyberconsciousness. Much of who we are is what we consciously attend to out of the unconsciously managed background.

A solution to the consciousness conundrum—too many clothes dangling out of the suitcase!—is Douglas Hofstadter’s “continuum of consciousness.” His approach declares consciousness not to be a “here or not” sort of thing, but instead to be present to a greater or lesser extent in things that demonstrate, to a greater or lesser extent, one or more of the aspects described above—self-awareness, sentience, morality, autonomy, and transcendence. In I Am a Strange Loop, Hofstadter grudgingly (owing to the nastiness of his confession) admits a scintilla of consciousness even to the mosquito. While Hofstadter doesn’t talk about the Google Car, the “continuum of consciousness” would surely grant it a mosquito’s quantum of consciousness, and perhaps a bit more, because unlike the mosquito it need not do harm to another (biting and sucking blood in the case of the insect) in order to achieve its “life purpose”: it has been driven over a million miles with no accidents. Hofstadter’s confidence in the logic of the continuum is such that he concedes to Gandhi and Albert Schweitzer a greater consciousness than to himself, because they demonstrated an exemplary conscientiousness (self-awareness, sentience, morality, autonomy, and transcendence) superior to his own by any measure.

Another way to appreciate the continuum of consciousness is to reflect that we

consider creatures to be more conscious to the extent that the decisions they make are more sophisticated, and thus less obviously preprogrammed by evolution, and to the extent that they weigh different wants and urges built in by evolution against each other. The athlete’s decision to run through pain is certainly conscious in this sense. In this sense, consciousness is graded, since evidently the athlete makes more sophisticated plans than a fish.6

People with mindclones might in fact be said to “raise their consciousness level” or “expand their mind” to the extent cyberconscious extensions of ourselves engage us in more sophisticated decision making, as dual-minded minds, and are less preprogrammed by evolution (e.g., driven by what Carl Sagan called our “reptilian urges”), even if programmed by mindware engineers. Alternatively, mindclones might be thought to be of subhuman consciousness if their decision making was rudimentary and obviously “hardwired.”

Supportive of Hofstadter’s continuum of consciousness is the July 7, 2012, “Cambridge Declaration on Consciousness,” the signing of which was so momentous that the popular television newsmagazine 60 Minutes filmed it. According to the declaration, “a prominent international group of cognitive neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists and computational neuroscientists” conclude that the “weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.” With a bit more restriction on the length of the continuum of consciousness, Francis Crick and Christof Koch agree, “a language system (of the type found in humans) is not essential for consciousness—that is, one can have the key features of consciousness without language. This is not to say that language does not enrich consciousness considerably.”7 Hence, when we are talking about consciousness in this book we do not mean any kind of consciousness, as that includes beings that chirp, bark, and oink; we are talking about human consciousness.

Therefore, the definition of human cyberconsciousness needs to be personal yet concrete, and anthropocentric yet ascertainable. Grouping self-awareness and morality into autonomy, and sentience and transcendence into empathy, we arrive at the following definition:

Human Cyberconsciousness = A continuum of software-based human-level autonomy and empathy as determined by consensus of a small group of experts in matters of human consciousness.

Clearly, this is a humancentric definition, but it is not tautological. It is not circular because “experts in matters of human consciousness” are the ones who determine whether “human-level autonomy and empathy” are present. It is humancentric, which is in fact what we want, because, as American philosopher, writer, and cognitive scientist Daniel C. Dennett says, “Whatever else a mind is, it is supposed to be something like our mind; otherwise we wouldn’t call it a mind.”8 In other words, humanly conscious is a shorthand way of judging whether a subject thinks and feels like we do, or like other people think and feel. I can muster a certain degree of sympathy with Supreme Court Justice Potter Stewart, who, when asked to define pornography, replied, “I know it when I see it.”

By “continuum of human-level autonomy and empathy,” I am also including those kinds of independent thinking and feeling that occur subconsciously, in the unconscious mind. Cyberconsciousness software will have to provide for some quantum of unconscious conceptions, motivations, and decisions to produce a human-level mind. This is no showstopper, as code running in the background, of which a foreground information-processing unit is not “aware,” is a long-ago-mastered programming skill.
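That mastery is easy to illustrate. The sketch below (all names invented for the example) runs a background thread whose raw classifications the foreground loop never inspects; the foreground consumes only finished judgments, much as a conscious mind receives conclusions prepared unconsciously:

```python
import threading
import queue

# Illustrative only: a foreground that sees finished judgments while a
# background worker it never inspects does the raw classification.
raw = queue.Queue()
prepared = queue.Queue()

def unconscious_worker():
    # Background "unconscious" processing: tag raw inputs before the
    # foreground ever sees them. None is the shutdown signal.
    while True:
        item = raw.get()
        if item is None:
            return
        prepared.put(("danger" if item < 0 else "safe", item))

worker = threading.Thread(target=unconscious_worker)
worker.start()
for signal in [3, -1, 7]:
    raw.put(signal)
raw.put(None)
worker.join()

judgments = []
while not prepared.empty():
    judgments.append(prepared.get())  # foreground sees only prepared results
```

The foreground loop has no access to the worker's internals, yet its "experience" is shaped entirely by them, which is the structural point being made about unconscious processing.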

The Einstein of computing, Alan Turing, was the first to publish the idea that software was humanly conscious if it successfully passed itself off to humans as being humanly conscious. Today we call this the “Turing test.” In the words of his biographer: “To avoid philosophical discussions of what ‘mind’ or ‘thought’ or ‘free will’ were supposed to be, he favoured the idea of judging a machine’s mental capacity simply by comparing its performance with that of a human being. It was an operational definition of ‘thinking’, rather as Einstein had insisted on operational definitions of time and space in order to liberate his theory from a priori assumptions.… If a machine appeared to be doing as well as a human being, then it was doing as well as a human being.”9

Our definition of human cyberconsciousness tightens the Turing test to require that software persuade not just a single individual but a small group of experts, and not simply with regard to casual conversation but with regard to autonomy and empathy. One might criticize the Turing test by saying that Turing is, in effect, claiming that if a wooden duck “appeared to be doing as well” as a real duck, then it was a real duck, when in fact it obviously is not. But this criticism falls flat, because Turing’s whole point is that it is function being tested, not form. If a wooden duck swims as well as a real duck, then it is a real duck swimmer. If a machine thinks as well as a human thinker, then it is a human thinker.10
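The tightened test amounts to a simple consensus rule. The sketch below is only an illustration of the idea as stated: the judges, their scores, and the majority threshold are all invented for the example:

```python
# Illustrative only: the tightened Turing test as a consensus rule.
def panel_verdict(scores, threshold=0.5):
    """scores: one (autonomy_ok, empathy_ok) pair of booleans per expert.
    The software passes only if more than `threshold` of the experts judge
    that it shows human-level autonomy AND human-level empathy."""
    passed = [autonomy and empathy for autonomy, empathy in scores]
    return sum(passed) / len(passed) > threshold

# Three of four hypothetical experts find both criteria met: consensus reached.
verdict = panel_verdict([(True, True), (True, True), (True, False), (True, True)])
# verdict is True
```

Note how this differs from the classic test in both dimensions: several judges instead of one, and two explicit criteria instead of open-ended conversation.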

Brain Snobbery—Our Human Conceit

There are plenty of people who just cannot get their heads around the idea that a computer could express consciousness the way our friend or our mom expresses it toward us—through companionship, love, laughter, compassion, empathy, and so on. Indeed, as recently as World War II, in the 1940s, the word “computer” meant nothing like what it means today. A “computer” was usually a person whose job it was to make calculations, such as someone doing mathematics for an insurance company, or, less often, a machine that did mathematics for a person (just as a “washer” was once a person who washed clothes, and only later a machine that did so). For example, during the Great Depression, in the 1930s, the U.S. government spent stimulus money on hundreds of computers—people, not machines—to create mathematical tables for things like artillery trajectories. The people, poor and mostly undereducated, were even represented by a “computers union” and were given easy, repetitive subcomputations to make that were ultimately combined by mathematicians into sophisticated arithmetic solutions.

In 1937 Alan Turing published a scholarly article (in a journal called Computable Numbers) about a theoretical “universal computing machine” that could do anything provided it was given the correct program. This radical notion rendered mathematically rigorous the similar ideas of Charles Babbage and Ada King, Countess of Lovelace, a century earlier (1837). They called their machines “difference engines” (for numerical work, which they built) and “analytical engines” (for almost any task, programmed with punched cards, which they never built but which anticipated Turing’s idea). From this very slight context it took huge leaps of imagination to think that a “computer” might be something that could read, write, listen, scan, play videos, play music, diagnose medical conditions … and think … and feel. Yet this was precisely Turing’s vision, because he saw that future digital computers would be able to practically employ the same kind of logic that supported reading, writing, listening, scanning, video, music, diagnosis … and thinking … and feeling. As digital computing technology began to appear, in the 1950s and 1960s, understanding (both critical and supportive) audiences grew for Turing’s revolutionary claim. He made plain that not even human consciousness was excluded from what a computer could do in an article published under the title “Computing Machinery and Intelligence” in October 1950 (in a journal called Mind).11

Today, the public believes a computer can do just about anything (hence smartphones are called digital Swiss Army knives)—and they can surely read, write, listen, scan, play videos, play music, and diagnose conditions. Some can even move (robots), and think (within a programmed competency). Indeed, the modern layperson’s meaning for computer is something like “a category of devices that can do almost anything with information, and every year they get yet more capable.” This is a long way from “a person who does computations,” and getting rather close to Turing’s vision of “a machine that can do anything with information.” As computers begin to evidence emotions and other aspects of human consciousness, they will have completed the journey whose start and destination points are well summarized by the titles of the first and second journals that published Alan Turing’s seminal articles—from Computable Numbers to Mind. Having gone in half a century from a word meaning number-cruncher to a word meaning smart (as in smartphone), “computer,” I believe, will soon come to mean “place for artificial consciousness.” Note how each definition subsumes its predecessor: smart subsumes number crunching, and consciousness subsumes smart.

Until something starts acting in a way we recognize as human, we still have a hard time thinking it could be humanlike at some point. Computers fall under special suspicion because they both dominate our lives and remain inscrutable to most of us. Pretty pushy for a collection of wires and plastic and metal; the idea that a computer could be like us seems at once frightening and preposterous. If you still feel this way, you’re in good company.

Skeptics of software consciousness like the Nobel Prize–winning physician and physical chemist Gerald M. Edelman and the mathematical physicist and philosopher Roger Penrose say that the transcendent characteristics of human consciousness can never be codified digitally because they are too complicated, unknowable, or immeasurable. Edelman is adamant that a brain is not like a computer, and therefore a computer can never be like a brain. In Edelman’s words, “One illusion I hope to dispel is the notion that our brains are computers and that consciousness could emerge from computation.”12 Indeed, his three primary reasons that a computer (by way of computer software) could never become conscious are ultimately different ways of saying the same thing: The brain is vastly more complex than a computer.

Edelman is particularly important to this discussion because many critics of the idea that computer software could become cyberconsciousness look to his views for confirmation of their biases. As we examine Edelman’s arguments, ask yourself what will happen as computers are transformed by exponentially increasing sophistication. Even if the computer will never be like a brain, are we heading to a point when computers will perform human thought just like brains?

It is easy to get misled into thinking that because a brain is not like a computer, a computer cannot think like a mind. But it is important to remember that a computer does not have to replicate every function of a brain to support a mindclone. In an analogous vein, consider that a bird is not like a plane, but both fly. As I mentioned in the introduction, with billions of eukaryotic cells the bird is vastly more complex than a Boeing 747, which has just over six million parts. Today, planes fly farther, higher, and faster than birds. On the other hand, planes can’t stay aloft for months like swifts or frigate birds, although eventually advances in efficient and lightweight solar power and other types of battery storage systems may allow planes to stay in the air for very long periods of time. Likewise, airplanes can’t fly through small apertures or hover above a flower as hummingbirds do. However, the latest remote-controlled planes and tiny flying program-controlled surveillance equipment, drones, can do this.

It’s also crucial when thinking about this analogy to remember that for flying purposes we only want planes to provide a portion of the functionality that a bird provides. There is no prospect of planes laying eggs, nesting in trees or in the eaves of a house, or running on “fuel” in the form of fish, worms, or insects—and there is no practical or efficiency value in an airplane doing any of these things. In other words, a plane does not have to replicate a bird in every way to support safe and comfortable flight. Hence, I think it is fair to conclude that birds are to flight as brains are to consciousness.

The differences between brains and computers, or between birds and planes, miss the point. Only the military would be interested in a plane that performs the aerodynamic feats of a peregrine falcon. Most people’s interest in planes is as a way to go from city to city safely, efficiently, reliably, and in as much comfort as an airline allows these days. Similarly, most of us are not the least bit interested in a computer that can self-organize itself gradually from birth to maturity. Our intention is to provide a computer with an analogue of a human’s mind (not the brain) in one fell swoop. We are interested in a computer that thinks and feels the same as the original human mind. Edelman assumes, rather than deduces, his conclusion, because he assumes that consciousness is limited to brains. Whether or not brains are computers is not dispositive of whether or not consciousness can emerge from computation.

Things that are mutually exclusive sets, such as, for argument’s sake, brains and computers, can still give rise to phenomena that are common to both sets. For example, odd numbers and even numbers are mutually exclusive sets. We can imagine odd numbers to be brains and even numbers to be computers. Yet both can give rise to Fibonacci numbers, a series of numbers where the next number is found by adding up the two numbers before it, which can also be imagined as a metaphor for consciousness. Similarly, triangles and squares are mutually exclusive sets. Yet each may be combined to form rectangles. Edelman’s error is to say that since he has seen the rectangle of consciousness formed only from neurological squares, and since computers are triangles rather than squares, the rectangle of consciousness cannot be formed from triangles. He forgets that just as there are many ways to skin a cat, and many ways for a thing to fly, there are also many ways to form the rectangle of consciousness.

Edelman claims that while “the brain does not operate by logical rules,” “a computer must receive unambiguous input signals.”13 He emphasizes that inputs to the brain are not “a piece of coded tape,” referring to an archaic method of feeding information into a computer. Surely the brain is not like a very primitive computer that runs on “coded tape.” Actually, not all computers require unambiguous input signals. Some computers have successfully driven automobiles across the country and across deserts based on a blizzard of very ambiguous input signals.14 On the other hand, some modern computers are able to parse a blurry, buzzing reality into cognizable elements with very much the same result as a human mind gets. Fuzzy logic, statistical analysis protocols, and voting among parallel processors analyzing the same ambiguous data are just three of many techniques being deployed to enable software to make sense of confusing sensory inputs: to guess, and to guess strategically.

For example, let’s take a hike. Let’s follow a forest path near BINA48’s home in Vermont, blanketed in an earth-tone palette of fallen autumn leaves. Let’s take this path side by side with a smartphone or Google Glass–based computer equipped with cyberconsciousness and software trained or programmed to find paths. As we humans walk through the forest, billions of our neurons detect elements of millions of color, density, and geometry signals. Our eyes deliver this information to the brain at a rate of about three million bits per second.15 Vast networks of neurons, working in parallel, parse the avalanche of incoming signals into patterns, drawing on a lifetime of experience in which neurons that “fire together” come to “wire together.” Symphonic patterns of leaves, trees, and pathways emerge from this cacophony of input signals. We won’t consciously be aware of every leaf on every tree (indeed, our eyes can discern detail only in the tiny foveal region, not in our peripheral vision). From all the incoming data, our minds construct and abstract a forest and trail for us, stitching together the foveal detail our eyes deliver in visual saccades as they dart back and forth. Sometimes our minds fill in almost all the detail; sometimes things get fabricated out of “whole cloth.”
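The “fire together, wire together” rule this passage relies on can be sketched in a few lines of code. This is a toy illustration, not a method from the book; the unit names, pairings, and learning rate are all invented:

```python
# Toy Hebbian-learning sketch: connections between units that are
# repeatedly co-active grow stronger. All names and values are invented.
from collections import defaultdict
from itertools import combinations

weights = defaultdict(float)  # (unit_a, unit_b) -> connection strength
LEARNING_RATE = 0.1

def observe(active_units):
    """Strengthen every pairwise link among the co-active units."""
    for a, b in combinations(sorted(active_units), 2):
        weights[(a, b)] += LEARNING_RATE

# A walk in which leaf, tree, and path repeatedly appear together:
for _ in range(5):
    observe({"leaf", "path", "tree"})
observe({"leaf", "rock"})  # a one-off pairing

# The oft-repeated pairing is now "wired" five times as strongly.
print(weights[("leaf", "tree")], weights[("leaf", "rock")])
```

Real Hebbian models also add decay and normalization so that unused links weaken over time, paralleling the pruning of unimportant thoughts described later in the chapter.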

Meanwhile, our cyberconscious smartphone-based partner sees the same menagerie of autumn colors, shapes, and densities. But, unlike our brains, the cyberconscious partner rapidly compares the scene to millions of stored images and determines it to be a forest. The partner then determines which part of the forest scene has a continuous zone of reduced density—a path—and directs its attention down the path. The end result of what our cyberconscious partner does, and what a biological mind does, is remarkably similar: very high-level conscious abstractions (forest, path) from a blizzard of subconscious three-megabit-per-second data. The sense of awe that I feel for each moment in a forest is no less wondrous when experienced by cyberconsciousness. If we get lost in brush, our cyberconscious partner, as reliably as the Labradoodles with whom I hike, can discern a path to the trail back out of the forest.

In each instance there has been thought, although it was achieved as differently as a bird and a plane achieve flight. In each instance, and based on my many experiences of walking through forests with friends, there was likely some level of appreciation of beauty and a sense of satisfaction with a challenge surmounted. For the cyberconscious partner this requires the original programming of values for “aesthetics,” “wonder,” and “task completion” at a high level for natural scenes, autumn woods and hiking a trail. But this is no less constructed than teaching a child that nature is cool, the forest is amazing, and completing a challenge is good. Even granting (notwithstanding contrary data from feral children) that the human settings for aesthetics, wonder, and task completion might exist endogenously, as part of the wiring of a human brain, the feelings are no less authentic simply because they were “trained into” or programmed into a software mind. An appreciation of harmony is no less worthy because it was taught rather than inborn.

Second, Edelman notes that the brain is enormously variable at its finest levels, observing that “the wrinkled cortical mantle of the brain has about thirty billion nerve cells or neurons and one million billion connections. The number of possible active pathways of such a structure far exceeds the number of elementary particles in the known universe.”16 He doubts that computers can match this variability owing to a rigorous reliance upon internal clocks, inputs, and outputs. Yet the MacBook Pro I used to write this chapter has about five hundred billion bytes of memory, with each byte being about the memory capacity of a single neuron. In other words, there is over fifteen times more neural capacity in my laptop than in my cortical mantle. Hence, just in terms of numbers of nerve cells, there is no great difference between computers and the brain. If anything, computers now have more neuron equivalents than brains have neurons, and soon will have very many more.

Now, if each JPG in my computer’s memory connected to thousands of other JPGs, and if each of those connections participated in preferred, reinforced, and naturally selected hierarchies of additional connections, I would have something like a human brain. For example, imagine that when I click on a picture of my dad, he is surrounded by automatically called-up pictures of my mom, my cousins, our houses, our vacations, and so on—and that each of those is automatically surrounded by a similar halo of images, yet of all the connections, the ones that remain highlighted are those selected, from a large universe of possible images, as most relevant to the triggered image. Is this not how our own minds work when we browse through a photo album? We don’t remember and react to every past experience; we select and privilege some memories and connected feelings over others.

Our Consciousness, Our Software

You see things; and you say “Why?” But I dream things that never were; and I say “Why not?”
—George Bernard Shaw, Back to Methuselah


Brains are awesome relational databases. Consciousness arises from a set of connections among sensory neuron outputs, and from links between such connections and sequences of higher-order connections. With each neuron able to make as many as ten thousand connections, and with one hundred billion neurons, there is ample possibility for each person to have subjective experiences through idiosyncratic patterns of connectivity.

But brains need not be made solely of flesh. There are other ways to flexibly connect billions of pieces of information together. Software brains designed to run on powerful processors have reproduced the ways in which our brains give rise to the aspects of consciousness we’ve discussed: IBM’s Watson, who mastered the ambiguities of Jeopardy!; the BINA48 described in the introduction, who demonstrates empathy; Ray Kurzweil’s programs that idiosyncratically draw, compose music, and write poetry.

Many programmers, scientists, and others believe code can be written to transcend code.

It is this fresh and slightly enigmatic characteristic, especially when applied in furtherance of rationality and/or empathy, that we expect in anyone who is conscious rather than autonomic (engaging in actions or responses without conscious control). In a nutshell, people are not as predictable as machines because consciousness is not as algorithmic as a calculation. Consciousness requires idiosyncrasy: thoughts and actions based upon an individualized synthesis of options. Hence, “independence” does not require being a pioneer or a leader. It does require being able to make decisions and act based on a personalized assessment rather than only on a rigid formula.

This takes us back again to the essence of consciousness, and what philosopher David Chalmers calls the “hard problem” and the “easy problem” of consciousness. The “hard problem” is figuring out how the web of molecules we call neurons gives rise to subjective feelings or qualia (individual instances of conscious subjective experience, i.e., “the redness of red”). The alternative “easy problem” is how electrons racing along neurochemistry result in complex simulations of “concrete-and-mortar” (and flesh-and-blood) reality. Or how metaphysical thoughts arise from physical matter. Basically, both the hard and the easy problems of consciousness come down to this: How is it that brains give rise to thoughts (easy problem), especially about immeasurable things (hard problem), but other parts of bodies do not? If these hard and easy questions can be answered for brain waves running on molecules, then it remains only to ask whether the answers are different for software code running on integrated circuits.

At least since the time of Isaac Newton and Gottfried Leibniz, it has been felt that some things appreciated by the mind could be measured whereas others cannot. The measurable thoughts, such as the size of a building, or the name of a friend, were imagined to take place in the brain via some exquisite micromechanical processes. Today we would draw analogies to a computer’s memory chips, processors, and peripherals. Although this is the easy problem of consciousness, we still need an actual explanation of exactly how one or more neurons save, cut, paste, and recall any word, number, scent, or image. In other words, how do neuromolecules catch and process bits of information?

Those things that cannot be measured are the hard problem. In Chalmers’s view, a being could be conscious, but not human, if they were only capable of the “easy” kind of consciousness. Such a being, called a zombie, would be robotic, without feelings, empathy, or nuances. That does not fall under our working definition of consciousness. Since the non-zombie, non-robot characteristics are also purported to be immeasurable (e.g., the redness of red or the heartache of unrequited love), Chalmers cannot see even in principle how they could ever be processed by something physical, such as neurons.

Chalmers suggests that consciousness is a mystical phenomenon that can never be explained by science. If this is the case, then one could argue that it might attach just as well to software as to neurons—or that it might not—or that it might perfuse the air we breathe and the space between the stars. If consciousness is mystical, then anything is possible. As I demonstrate here, there’s no need to go there. Perfectly mundane, empirical explanations are available to explain both the easy and the hard kinds of consciousness. These explanations work as well for neurons as they do for software.

Figure 1, Essentialists vs. Materialists, illustrates the three basic points of view regarding the source of consciousness. Essentialists believe in a biological source specific to humans. This is basically a view that in the whole universe, almost miraculously, only brains can give rise to consciousness. Materialists believe in an empirical source, namely that consciousness emerges as patterns from myriad connections among information stored either as chemical states in brain neurons or voltage states in computer chips. The philosopher Daniel Dennett is a good proponent of this view, observing in his Multiple Drafts model of consciousness back in 1991, and before, that robot consciousness is, in principle, possible.17 Note that the diagram also indicates how a person could be both an essentialist and a materialist. This would be in the overlap area of the two circles.

Edelman is a good proponent of the point of view that only brains can give rise to consciousness, but in his case the reason is the material properties of the brain rather than a spiritual source. Other essentialists (represented by the part of the essentialist circle that does not overlap with the materialist one) believe it is something other than replicable material complexity that enables brains to be conscious. A third point of view is that consciousness is part of the fabric of reality, an aspect of space-time that can mystically attach to anything. The view that God gave consciousness to Adam and Eve, or other “first humans,” is part of this third, spiritualist perspective. While mystical explanations cannot be disproved, they are unnecessary, because there is a perfectly reasonable nonmystical explanation for both the easy and hard kinds of consciousness.18

I believe that John Searle of the University of California, Berkeley, provides the most creative insight into the categorization of philosophical approaches to the mind. Searle is famous in consciousness circles for having created a thought experiment called the Chinese room. The experiment purports to show that a conventionally programmed computer could not be conscious, for the same reason that, say, Google Translate does not understand the Chinese we ask it to translate into English. The conventionally programmed computer mindlessly associates each input with an output, with no internal process of subjectively caring about or contemplating what it is doing. The lights are on, but nobody’s home. This certainly fails the definitional test we set forth for cyberconsciousness above: human-level empathy and autonomy, in the judgment of human experts.

Searle broadens the definition of “materialism” to encompass subjective phenomena such as the thoughts of consciousness. He notes that these are nonspiritual and “a part of the natural ‘physical world’” but simply not tangible and quantifiable.19 Consequently, he is comfortable saying “If brains can have consciousness as an emergent property, why not other sorts of machinery?”20 This categorizes him as a materialist, for he concludes that “there is no known obstacle in principle to building an artificial machine that can be conscious and can think.”21 But he reaches his conclusion by observing that it is not very important that the neuronal or software patterns that give rise to a thought (or qualia) be objectively measurable—that we could trace with advanced MRI-type equipment the neural pathways or get a readout of the software routines. This gives materialism its due—there is something empirical to observe from a third-party perspective and measure—but diminishes the import of these neuronal (or software) measurements, since such objective materiality is just part of what makes consciousness unique. He rises above materialism by clarifying that the actual experience of the ultimate thought, or string of thoughts, is not objectively measurable, because it occurs interiorly to one’s consciousness.22 Hence, subjectivity is real (i.e., not spiritual and not limited to a miraculous human brain) even if it is not available to a third party’s measurement.23 Later in this chapter, under “Measuring the Immeasurable,” I will discuss how we can at least obtain a good-enough approximation of this subjective materialism.

If human consciousness is to arise in software we must do three things: first, explain how the easy problem is solved in neurons; second, explain how the hard problem is solved in neurons; and third, explain how the solution in neurons is replicable in information technology. The key to all three explanations is the relational-database concept. With the relational database an inquiry (or a sensory input for the brain) triggers a number of related responses. Each of these responses is, in turn, a stimulus for a further number of related responses. An output response is triggered when the strength of a stimulus, such as the number of times it was triggered, is greater than a set threshold.24
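The threshold-triggering scheme just described can be sketched as spreading activation over a small associative store. The associations, link strengths, and threshold below are invented for illustration only:

```python
# Toy relational-database sketch: a stimulus triggers related responses,
# each response re-triggers further responses, and a response fires only
# when its accumulated strength clears a threshold. All data invented.
associations = {
    "red": [("apple", 0.6), ("stop sign", 0.5), ("rose", 0.3)],
    "apple": [("orchard", 0.7), ("pie", 0.4)],
    "stop sign": [("traffic", 0.8)],
}
THRESHOLD = 0.25  # a response fires only above this strength

def activate(stimulus, strength=1.0, fired=None):
    """Propagate activation, attenuating by each link's strength."""
    fired = {} if fired is None else fired
    if strength < THRESHOLD or fired.get(stimulus, 0.0) >= strength:
        return fired
    fired[stimulus] = strength
    for response, weight in associations.get(stimulus, []):
        activate(response, strength * weight, fired)
    return fired

result = activate("red")
# "pie" stays silent: 1.0 * 0.6 * 0.4 = 0.24 is below the threshold.
print(sorted(result))
```

The same input can thus light up a whole constellation of related responses while weakly coupled ones never cross the firing threshold, which is the behavior the relational-database analogy is meant to capture.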

For example, certain neurons are hardwired by our DNA to be sensitive to different wavelengths of light, and other neurons are sensitive to different phonemes, the basic units of a language’s sound system, which combine to form meaningful words. So, suppose that when looking at something red, we are repeatedly told that “it is red.” The red-sensitive neuron becomes paired with, among other neurons, the neurons that are sensitive to the phonemes that make up the sounds “it is red.” Over time, we learn that there are many shades of red, and the neurons responsible for these varying wavelengths each become associated with words and objects that reflect the particular “redness” of any one of the millions of shades that exist.

The redness of red is simply (1) each person’s unique set of connections between neurons hardwired genetically from the retina to the various wavelengths we associate with different reds, and (2) the plethora of further synaptic connections we have between those hardwired neurons and neural patterns that include things that are red. If the only red thing a person ever saw was an apple, then redness to them means the red-wavelength neuron output that is part of the set of neural connections associated in their mind with an apple. Redness is not an electrical signal in our mind per se, but it is the associations of color wavelength signals with a referent in the real world. Redness is part of the multifaceted impression obtained in a second or less from the immense pattern of neural connections we have built up about red things.

After a few front lines of sensory neurons, everything else is represented in our minds as a pattern of neural connections. It is as if the sensory neurons are our alphabet. These are associated (via synapses) in a vast number of ways to form mental images of objects and actions, just as letters can be arranged into a dictionary full of words. The mental images can be strung together (many more synaptic connections) into any number of coherent (even when dreaming) sequences to form worldviews, emotions, personalities, and guides to behavior. This is just like grouping words into a limitless number of coherent sentences, paragraphs, and chapters.

Grammar for words is like the as yet poorly understood electrochemical properties of the brain that enable strengthening or weakening of waves of synaptic connections that support attentiveness, mental continuity, and characteristic thought patterns. Continuing the analogy, the self, our consciousness, is the entire book of our autonomous and empathetic lives, written with that idiosyncratic style that is unique to us. It is a book full of chapters of life phases, paragraphs of things we’ve done, and sentences reflecting streams of thought.

Neurons save, cut, paste, and recall any word, number, scent, image, sensation, or feeling no differently for the so-called hard than for the so-called easy problems of consciousness. Let’s take as our example the “hard” problem of love, or what Ray Kurzweil calls the “ultimate form of intelligence.” Robert Heinlein defines it as the feeling that another’s happiness is essential to your own. Neurons save the subject of someone’s love as a collection of outputs from hardwired sensory neurons tuned to the subject’s shapes, colors, scents, phonetics, and/or textures. These outputs come from the front-line neurons that emit a signal only when they receive a signal of a particular contour, light wave, pheromone, sound wave, or tactile sensation. The set of outputs that describes the subject of our love is a stable thought; once established as part of the set with some units of neurochemical strength, any one of the triggering sensory neurons can trigger other sensory neurons. These neurons paste thoughts together with matrices of synaptic connections.

The constellation of sensory neuron outputs that is the thought of the subject of our love is, itself, connected to a vast array of additional thoughts, each grounded directly or, via other thoughts, indirectly to sensory neurons. Those other thoughts would include the many cues that lead us to love someone or something. There may be resemblance in appearance or behavior to some previously favored person or thing, or logical connection to some preferred entity. As we spend more time with the subject of our love, we further strengthen sensory connections with additional and robust synaptic associations, such as those connected with eroticism, mutuality, endorphins, and adrenaline.

There is no neuron with our lover’s face on it. There are instead a vast number of neurons that, as a stable set of connections, represent our lover. The connections are stable because they are important to us. When things are important to us, we concentrate on them, and as we do, the brain increases the neurochemical strengths of their neural connections. Many things are unimportant to us, or become so. For these things the neurochemical linkages become weaker, and finally the thought dissipates like an abandoned spiderweb. Neurons cut unused and unimportant thoughts by weakening the neurochemical strengths of their connections. Often a vestigial connection is retained, capable of being triggered by a concentrated retracing of its path of creation, starting with the sensory neurons that anchor it.

That means that the so-called hard problem of consciousness isn’t so hard after all. Crick and Koch astutely observe that there is nothing new about complex and experiential phenomena arising from a multiplicity of adroitly connected inanimate pieces. The subjectivity of our perceptions (redness of red from an array of neurons) is no more difficult to explain than “the ‘livingness’ of living things (such as bacteria, for example) from an array of ‘dead’ molecules.” The DNA discoverer and his neuroscience collaborator conclude that

meaning derives both from the correlated firing … and from the linkages to related representations. For example, neurons related to a certain face might be connected to ones expressing the name of the person whose face it is, and to others for her voice, memories involving her and so on, in a vast associational network, similar to a dictionary or a relationship database.25

Wholes can transcend the dimensionality of pieces when the pieces are combined right. Subjectivity is simply each person’s unique way of connecting the higher-order neuron patterns that come after the sensory neurons. Think of subjectivity as the volume on your music player. The more you value the sensation, the memory, the feeling, the idea, or the person, the louder you want to play it, the better headset you search for to listen to it, the tighter you close your eyes as you savor the sounds. The hard problem of consciousness is the idiosyncratic settings of amplitudes to the patterns of connections in our minds. The easy problem of consciousness is solved in the recognition of sensory neurons as empirical scaffolding upon which can be built a skyscraper’s worth of thoughts. If it can be accepted that sensory neurons can as a group define a higher-order concept, and that such higher-order concepts can as a group define yet higher-order concepts, then the easy problem of consciousness is solved. Material neurons can hold nonmaterial thoughts because the neurons are linked members of a cognitive code. It is the metamaterial pattern of the neural connections, not the neurons themselves, that contains nonmaterial thoughts. The vogue term for this metamaterial pattern is the human “connectome.” Hence, neuroscientists today feel comfortable saying “We are our connectome.”26

Lastly, there is the question of whether there is something essential about the way neurons form into content-bearing patterns, or whether the same feat could be accomplished with software. The strengths of the brain’s neuronal couplings can be replicated in software by writing code that assigns varying strengths to software couplings in relational databases. Weighting software couplings means making certain decisions more likely than others. For example, in the formula x = 5y, the value of x is five times the value of y. If thoughts with weight x arise five times as often as, or carry five times the importance of, thoughts with weight y, then x = 5y.
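A minimal way to realize the x = 5y weighting in software is to sample responses in proportion to their weights. The thought names and counts below are invented:

```python
# Toy weighted-coupling sketch: thought x is coupled five times as
# strongly as thought y, so it surfaces about five times as often.
import random

couplings = {"thought_x": 5, "thought_y": 1}  # the x = 5y weighting

def next_thought(rng):
    thoughts = list(couplings)
    return rng.choices(thoughts, weights=[couplings[t] for t in thoughts])[0]

rng = random.Random(0)  # fixed seed for reproducibility
draws = [next_thought(rng) for _ in range(6000)]
ratio = draws.count("thought_x") / draws.count("thought_y")
print(round(ratio, 1))  # close to 5
```

Over many draws the observed ratio converges on the coupling ratio, which is all the weighting needs to accomplish: privileged thoughts surface more often, without ever being guaranteed.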

William Sims Bainbridge of the U.S. National Science Foundation is a leading expert on software coding of personality attributes. He has managed the validation across thousands of people of personality-capture surveys such as the Big Five (a mainstay of psychologists consisting of twenty Likert-type-scale questions relating to each of extraversion, agreeableness, conscientiousness, emotional stability, and imagination)27 and the 16PF (assesses sixteen dimensions of personality—warmth, reasoning, emotional stability, dominance, liveliness, rule-consciousness, social boldness, sensitivity, vigilance, abstractedness, privateness, apprehension, openness to change, self-reliance, perfectionism, and tension).28

Bainbridge has created and validated his own personality-capture system based upon more than one hundred thousand questions.29 Each question is weighted two-dimensionally, by relative importance of the personality attribute to the person and relative degree of applicability to the person. This book’s hypothesis is that a quantification of a person’s mindfile using Bainbridge’s two-dimensionally weighted hundred thousand personality-capture questions will produce a software-based personality that responds to life as would the original person. The actual assessment across the hundred thousand personality-capture questions would be done automatically by mindware reviewing a person’s mindfile. Mindware would then use the results of the assessment to establish the personality settings for a mindclone.
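Bainbridge’s two-dimensional weighting can be sketched as an importance-weighted average of applicability scores per trait. The questions, traits, and numbers here are invented for illustration and are not his actual instrument:

```python
# Toy two-dimensional personality-capture weighting: each answer carries
# an importance score and an applicability score, and a trait's setting
# is the applicability average weighted by importance. Data invented.
answers = [
    # (trait, importance 0-1, applicability 0-1)
    ("extraversion", 0.9, 0.8),
    ("extraversion", 0.4, 0.2),
    ("agreeableness", 0.7, 0.9),
]

def trait_settings(answers):
    totals = {}
    for trait, importance, applicability in answers:
        w_sum, a_sum = totals.get(trait, (0.0, 0.0))
        totals[trait] = (w_sum + importance, a_sum + importance * applicability)
    return {t: a / w for t, (w, a) in totals.items()}

settings = trait_settings(answers)
print(settings["extraversion"])  # (0.9*0.8 + 0.4*0.2) / (0.9 + 0.4)
```

Answers the person marks as unimportant barely move the trait setting, while highly important answers dominate it, which is the point of weighting on two dimensions rather than averaging raw scores.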

The connectivity of one neuron to up to ten thousand other neurons in the brain can be replicated in software by linking one software input to up to ten thousand software outputs. Probability-based weightings, such as using the statistical methods of Bayesian networks, will also help mindware mirror human thought processes. The ability of neuronal patterns to maintain themselves in waves of constancy, such as in personality or concentration, could equally well be accomplished with software programs that were written to keep specific software groupings active, such as performing a complicated calculation over and over.
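The Bayesian weighting mentioned above can be illustrated with a single belief update; the prior and likelihoods are invented numbers, not measurements:

```python
# Toy Bayesian update: revise the belief "this scene contains a path"
# after an ambiguous visual cue (a strip of reduced foliage density).
# All probabilities are invented for illustration.
prior_path = 0.3           # P(path) before seeing the cue
p_cue_given_path = 0.8     # P(cue | path)
p_cue_given_no_path = 0.2  # P(cue | no path)

# Bayes' rule: P(path | cue) = P(cue | path) * P(path) / P(cue)
p_cue = (p_cue_given_path * prior_path
         + p_cue_given_no_path * (1 - prior_path))
posterior = p_cue_given_path * prior_path / p_cue
print(round(posterior, 3))  # 0.632: the cue raises the belief from 0.3
```

Chained across thousands of such couplings, updates of this kind let mindware weigh ambiguous evidence the way the forest-walk example describes, rather than demanding unambiguous inputs.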

Finally, a software system can be provided with every kind of sensory input (audio, video, scent, taste, tactile). Putting it all together, Daniel Dennett observes, “If the self is ‘just’ the Center of Narrative Gravity, and if all the phenomena of human consciousness are explicable as ‘just’ the activities of a virtual machine realized in the astronomically adjustable connections of a human brain, then, in principle, a suitably ‘programmed’ robot, with a silicon-based computer brain, would be conscious, would have a self. More aptly, there would be a conscious self whose body was the robot and whose brain was the computer.”30

At least for a materialist, there seems to be nothing essential to neurons, in terms of creating consciousness, that could not be achieved as well with software. The quotation marks around “just” in the quote from Dennett are the famous philosopher’s facetious smile. He is saying with each “just” that there is nothing to belittle about such a great feat of connectivity and patterning.

Indeed, Dennett is following the path Alan Turing blazed a half century earlier:

that whatever a brain did, it did by virtue of its structure as a logical system, and not because it was inside a person’s head or because it was a spongy tissue made up of a particular kind of biological cell formation. And if this were so, then its logical structure could just as well be represented in some other medium, embodied by some other physical machinery. It was a materialist view of mind, but one that did not confuse logical patterns and relations with physical substances and things as so often people did.31

Turing was perhaps the first to truly appreciate that minds, or psychology, just happened to function by way of the same kind of discrete logical systems as ideal computers. (No doubt this insight flowed from his brilliance in computing, including his wartime work at Bletchley Park on the electromechanical Bombe machines that cracked the Germans’ Enigma ciphers and helped win World War II.) Therefore the function of a mind—human consciousness—could in fact be replicated in an appropriate computer. “It was not a reduction, but an attempt at transference, when he imagined embodying such systems in an artificial ‘brain.’”32

The Limitlessness of Software

Is there something physical about computers and software that cannot replicate the connectivity of the human mind? No. Already, the number of possible arrangements of the five hundred billion bytes of memory in my computer far exceeds the number of elementary particles in the known universe. The kind of statistic Edelman cites is not beyond the complexity potential of computers. Mindware consciousness is achievable because our human thoughts and emotions are patterns among symbols. These patterns can be the same whether the symbols are encoded in our brains or in our mindfiles. The patterns and connections are so complex that today only certain threads are available as software, but every year the range of symbol association achievable by software leaps forward. For example, software that figures out how to get from our house to a new restaurant is now common, but didn’t exist just a decade ago.

There is no a priori limit on the number of connections between the software equivalents of nerve cells. Top websites, for example, routinely have tens of thousands of active connections—other websites pointing to them. But it is also true that no one has yet written software that enables a database with anywhere near the million billion connections among fields that exist in the brain. Work in this direction is proceeding rapidly, however, and traditional relational-database systems are becoming more agile and more interconnected with the vast, low-cost resources of cloud computing.

Nevertheless, there is actually no need to argue over the relevance of information-technology development to the extraordinarily rich milieu of cerebral neurology. Instead, we can grant Edelman’s point that the brain is vastly more complex, variable, self-organizing, unpredictable, and dynamic than any computer. We materialists all readily concede that human consciousness arises from the unfathomably complex neural circuitry of the human brain—from this “marvelous matter underlying the mind [that] is like no other.”33 It does not follow, however, that human consciousness cannot arise from any other substrate.

As I’ve noted, flight surely does arise from the exquisitely complex biochemistry and neuromuscular physiology of birds. Yet does that mean flight cannot arise from our vastly simpler airplanes, helicopters, jets, drones, and remotely piloted vehicles? We know that it can. Nature is full of examples of separate evolutionary development of comparable functionality—some pathways simpler than others. Machines, whether unnecessarily complex or elegantly designed, can accomplish many of the same tasks. The eye has evolved at least fifty different times in the last few hundred million years, developing a wide range of adaptations to meet the needs of the beings that have them: night vision, color vision, binocular vision, eagle vision, infrared vision. For example, the eye started as a simple photosensitive pigment, essentially a yes-or-no relay of the presence of light, insufficient for vision. During what is called the Cambrian explosion, a time of accelerated evolution about 542 million years ago, eye structure and functionality increased in many species to include image processing and detection of light direction. Lenses evolved later and at different times in different species, according to their various evolutionary needs. That a structure (such as the human brain) results in a process (such as consciousness) does not say anything as to whether something very similar to that process (such as cyberconsciousness) could be achieved with a different structure (mindfiles, mindware, and mindclones).

Edelman makes a tour-de-force showing that the brain is complex enough to account for any degree of subjective thought. He shows that it does so through variance and selection on a microscopic scale writ numerically very large (what he calls neural Darwinism), not via a priori design and logic. Edelman is persuasive in his assessment that the brain’s functionality arises from a design principle more like the strengthening and adaptive capabilities of our immune system than like the architecture of a computer network. That is, when the immune system is challenged with things it doesn’t recognize, either it adapts or the person dies. Over the course of human evolution, those whose immune systems adapted—developing memory T cells and helper T cells, for example—lived, and those who lacked them did not pass on their genes. As such, Edelman demolishes the arguments that consciousness requires explanations from quantum physics (much less metaphysics). On the other hand, he doesn’t touch the question of whether consciousness could also arise from relational mindfile databases and recurrent mindware algorithms.34

In other words, Edelman is adamant that “all basketball stars are tall” (i.e., that any neural circuitry as nonlinearly complex as a human brain—the “basketball star”—will be conscious, or “tall”), but he cannot logically argue that “everyone tall is a basketball star” (i.e., that all consciousness must be brain-based).

Finally, Edelman observes that the brain is not a computer because a brain is defined by a genetic code far more limited than the amount of neural variability that results in a mature brain. But computer software need not be predetermined either. My computer came with Microsoft Office, but I have written countless pages that far transcend the amount of specificity represented by even that leviathan software package. In general, operating systems for computers, like genetic codes for people, can create an almost unlimited amount of variability.

In addition, there are Darwinian algorithms that allow software code to self-assemble analogously to how neural connections self-assemble. Code that self-assembles usefully enough to be in frequent demand (like neural connections that fire and wire together) gets replicated more frequently (like preferred neural pathways for frequent thoughts or behaviors). A great example of this is the massively retweeted one-liner or image, or the overly sampled music riff, or the endlessly viewed, liked, and shared video clip. All of these codes get recombined into larger assemblies of code in a Darwinian process of self-replication (with humans or robot servers functioning as natural selection). The same process also occurs with code for voice recognition, spatial navigation, and chatbotting (or automated online conversational agents)—but hackers cut-and-paste this code. Why do you think so many websites make us decipher strange-looking strings of letters and numbers to prove we are not computer robominds? It is because robot code self-assemblers, also called web crawlers, are already in our midst.
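The flavor of such Darwinian self-assembly can be captured in a toy genetic algorithm—a purely hypothetical sketch, not any production code-evolution system: candidate “snippets” (here, bit strings) that score as more useful are replicated more often, with recombination and occasional mutation.

```python
import random

def fitness(genome):
    """Proxy for 'usefulness': how often a snippet would be in demand."""
    return sum(genome)

def evolve(pop_size=30, genome_len=20, generations=60, seed=42):
    rng = random.Random(seed)
    # Random initial population of bit-string "snippets".
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the more useful half gets replicated.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        next_gen = []
        while len(next_gen) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)   # recombination point
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # occasional mutation
                i = rng.randrange(genome_len)
                child[i] ^= 1
            next_gen.append(child)
        pop = next_gen
    return max(fitness(g) for g in pop)

best = evolve()
```

Run over enough generations, selection concentrates the population on high-scoring snippets—much as frequently fired neural pathways, or endlessly shared clips, come to dominate.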

“Hackers,” by the way, should not be confused with “crackers.” Hackers are honest people who are passionate about computer programming and have an ethic that encourages very liberal sharing of the software code they create. Crackers are programmers who try to break through computer security systems.35

Ultimately, Edelman is a biological essentialist, rejecting the idea that our minds can have any existence detached from their underlying brains, and human brains in particular. Because he believes minds arise from neural Darwinism—selection due to competition among vast numbers of neurons—he cannot imagine that a mind could arise from something so predetermined-sounding as a “computer program.” On the other hand, he readily admits that much of the knowledge that arises from our minds is beyond scientific analysis, creating wide parallel playing fields for the humanities.36 I believe that, if pressed, he would concede that the mind could create a software mind, but that it would not think very much like a mind that arose organically via neural Darwinism.37

Yes, the brain is complex: some 100 billion neurons, so densely packed that a quarter million of them fit in the size of this ¦, with many if not most of them sprouting thousands of microscopic ultra-spindly connections to other neurons, all in the depth and surface area equivalent to the front of a T-shirt.38 Wow. But information technology is also complex. An array of microchips can have some 100 billion components, so densely packed that millions of them fit in the size of this ¦, with many if not most of those components sprouting ultra-spindly (nanometer-width) connections to other components. Wow, again. And while brains must be of a size that can be birthed through a vaginal canal, information technology can be spread across thousands of square feet of servers to enable tens of thousands of connections among discrete integrated circuits, with the complexity of such processing delivered wirelessly. I believe that Edelman and other biological essentialists have done a bang-up job of proving that of all the flora and fauna on this planet, only human brains could have come up with mindcloning substrate as structurally complex in its own way as human brains are in theirs. But now that this has occurred, and continues to rapidly advance, it is a reasonable proposition that this artificial complexity can give rise to a mind as surely as does our biological complexity.

Measuring the Immeasurable

The legal system long ago found a practical solution to the problem of assessing consciousness: the jury of one’s peers. Society is accustomed to letting others make determinative decisions about the mental states of individuals. For instance, someone is guilty of an intentional crime if other people (the jury) conclude that he or she had the mental intent to commit the crime (in addition to performing the criminal acts). Other times, a jury may decide someone is not culpable because of a diminished mental state, or a reduced consciousness. These people are treated very differently from the conscious criminal, and rightly so. Likewise, a team of medical and sometimes religious experts (depending on your point of view and belief system) will often assist in determining the state of consciousness of a person in a so-called “vegetative state” when helping a family make decisions about a loved one’s care.

Importantly, there has recently been a sea change from neglect to acceptance of the scientific study of consciousness. One group of scientists now feels that the testimony of a purportedly conscious subject is good enough for measuring consciousness as an experimental variable. For example, Bernard Baars concludes:

In sum, behavior reports of conscious experience have proved to be quite reliable. Although more direct measures are desirable [such as neuronal correlates of consciousness gleaned from brain scans], reportability provides a useful public criterion for brain studies of consciousness in humans and some animals.39

In other words, cognitive science is beginning to accept that we can scientifically determine whether or not someone is conscious based on our assessment of the subject’s own reports of their consciousness. I sometimes hear the fear that mindclones could fool us—that a very clever robot that was artificially intelligent but not conscious, and hence what some philosophers would call a zombie, might fake its way into human rights and citizenship. But this fear is misplaced, because cognitive scientists have gained adequate confidence in their ability not only to judge reports of conscious experience, but also to adduce the extent to which those reports conform to actual subjective experiences (i.e., the experiences inside the subject’s head that make him or her a person instead of a zombie). Baars continues:

For scientific purposes we prefer to use public reports of conscious experiences. But there is generally such a close correlation between objective reports and the subjective experiences they refer to, that for all intents and purposes we can talk of phenomenology, of consciousness as experienced. Thus in modern science we are practicing a kind of verifiable phenomenology.

If a mindclone phenomenologically appears to be conscious, then she or he very probably is a conscious being with a subjective perception of the world. This is very much what Alan Turing predicted in his classic 1950 paper, “Computing Machinery and Intelligence.” As noted above, philosopher of consciousness John Searle has articulated why there is no such thing as a fully objective determination of consciousness. It is a subjective state, albeit one that occurs in the real world. It just can’t be measured by objective third parties because it is by definition a first person’s interior experience. Searle notes that “Behavior is important to the study of consciousness only to the extent that we take the behavior as an expression of, as an effect of, the inner conscious process.”40 This is precisely what the judicial system, and expert assessor processes, are designed to do.

Thus, it is sensible to let society appoint experts and make similar decisions as to whether or not someone or something is conscious for purposes outside the criminal-justice or medical-care systems. For the determination of consciousness, a consensus of three or more experts in the field, such as psychologists or ethicists, substitutes for a jury or a team of medical and spiritual ethicists. It is likely that professional associations will offer Certifications in Mindclone Psychology (C.M.P.) to better measure and standardize cyberconsciousness determinations. These professional associations will then become among the best friends of the mindclones, just as the American Psychiatric Association became a great friend of the gay movement by reversing itself and admitting that homosexuality, bisexuality, and transsexuality were not deviant human behaviors compared with heterosexuality but different behaviors—and therefore psychologically and physiologically normal in humans.

Of course an expert judgment of consciousness is not the same thing as a fully objective determination of consciousness. After all, juries can get it wrong, and have: they have deemed defendants to lack criminal intent when, in fact, they most certainly had such malevolent intent. Doctors have determined patients to be irreversibly comatose only to see these people “wake up” and inquire about their favorite team’s playoff standings or some other benign, everyday event. However, when objective determinations are impossible, society readily accepts the wisdom of alternative appraisals by peers or experts, and accepts as inevitable that errors in judgment will sometimes occur. There is nothing that can mitigate all the risks inherent in being human.

It comes down to this: “immeasurables” are part of life, perhaps the most succulent. We do not shy away from love because we can’t measure it rationally, nor demur from enjoying art and music because quadratic equations can’t explain their appeal. And so it is with consciousness: if others, especially experts in mental health, see so much of themselves in a mindclone as to say “that one is human,” then that one is human. Guilty as charged.

In 1950 Alan Turing observed that since there was no way of telling that other people were “thinking” or “conscious” except by a process of comparison with oneself, the same process would have to apply to allegedly conscious computers. He concluded:

“Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.41

When most people I know today are waiting for their handheld computer or smartphone to produce a search result, or turn-by-turn driving directions, they say out loud, “It’s thinking about it.” Turing was right.

The Science That Isn’t Fiction

[In 2004] the world’s most advanced robotic cars struggled to make their way around even basic obstacles such as large rocks and potholes in the road. Despite millions of dollars’ worth of high-tech equipment, the vehicles managed to mimic little of what a human can do behind the wheel. Now, however, they can squeeze into parking places, flip on their indicators before making turns and even display the flair of a London taxi driver when merging into traffic.


We are close enough to cyberconsciousness to feel the bits and bytes of cyberbreath on our cheeks. Because we take technological advances in stride, we feel a certain expectation, even entitlement, about them. (How often have you expressed frustration at not getting a Google search result within a few seconds?) To realize how narrow the leap is between our online personas and mindclones, it is necessary to understand the exponential nature of advances in information technology. Perhaps this knowledge will make you more conscious of what sorts of digital remains you disperse across the digital universe.

Pattern-recognition expert Ray Kurzweil has shown in his bestseller The Age of Spiritual Machines that information technology has been doubling its capabilities every one to two years since at least the 1950s, if not long before. For example, we have more computing power in a hundred-dollar cell phone today than there was in the $30 billion Apollo spacecraft that went to the Moon in the 1960s and 1970s. While voice-recognition technology was nonexistent in the 1990s, only ten years later it was a free feature in smart technology. Ray Kurzweil and others have shown that based upon the doubling rate of information technology it is reasonable to expect mindclones for about $1,000 by the end of the 2020s, and sooner than that for a higher price. Like most technological wonders, it won’t take long for the price to come down as the technology improves and demand grows. When Sharp and Sony introduced flat-screen televisions, in 1997, prices topped $15,000, out of reach for most consumers. Now anyone can buy one on Amazon or at Walmart for a fraction of that price.

Figure 2, per Kurzweil’s chart, compares the information-processing capability (measured in calculations per second per thousand dollars) available at various dates (the black dots) to biological life-forms with the equivalent information-processing capability (in calculations per second). Up through 2010 we see computer programs no smarter than a mouse or a bug, and we are not impressed. However, to conclude from this information that mindclones are very far in the future is misleading in two ways.

First, there are a great many application-specific software packages that are already far smarter than even very clever humans. For example, the mapping software in your cell phone can best any of us at finding addresses in unknown neighborhoods. Gaming software, from grandmaster chess to the subtle Asian game of Go, and including all manner of virtual-world environments, far outstrips the conceptual capabilities of ordinary humans. While no software package has “put it all together” the way a human mind does, programs are popping into existence with great competence in many of the areas to which we devote our mental skills. How long could it be until some software “puts it all together”? Per Kurzweil’s chart,43 no longer than the 2020 time frame in which computers will have as many processors as the human brain has neurons.44 From there, he estimates that it will take about another decade, until the 2030s, until people will routinely interact with software-based consciousness that is convincingly human.45

There is a second reason, born of psychology, that we find it hard to realize how near-term mindclones are. This reason relates to the difference between our natural way of perceiving things, which is linear, and the way in which information technology is advancing, which is exponential. Linear things proceed the way children grow: perhaps half a foot or so a year until they reach a plateau height. This is the “linear” way we have evolved to perceive the relationship between changing things and the time it takes them to change. Hence, if there is a millionfold deficit between the processing capability of a typical computer today and that of a human mind, it is natural for us to project the arrival of mindclones as taking a number of years equal to about one million divided by the increase in processing capability we can get on our computers from one year to the next.

For example, my one-year-old very good computer has about 1/100,000 of the information-processing capability of a human brain (its processing speed is about that tiny fraction of the number of neural connections in a human brain, although its software is in some areas pretty advanced). In other words, it has only .001 percent of the capability of a human brain. It’s not even a rodent in terms of a brain’s ability to make connections between data that lead to entirely new ideas and insights—despite the fact that my MacBook Pro has, as I said earlier, about five hundred billion bytes of memory. There may be fifteen times more neural capacity in my laptop than in my cortical mantle, but my computer still is not at the point where it can form original ideas or have spontaneous epiphanies.

I could go buy a new computer today that has 2/100,000 or .002 percent of the capability of a human brain. At this rate, with the way my linear mind works, I would expect to wait about 99,998 years to buy a mindclone. What, me worry! Our linear minds take our most recent experience—such as going from a 1/100,000-of-a-human-mind computer to a 2/100,000-of-a-human-mind computer in one year—and extrapolate it forward such that we think it will take 998 more years to get 1 percent of a human mind, another thousand years to get to 2 percent of a human mind, another thousand years to get to 3 percent of a human mind, and so on.

In fact, though, information technology does not grow linearly, but exponentially. This means, according to the generalized form of Moore’s law, that information technology doubles every one to two years—something very different from growing linearly.46 Because computer capability doubles, next year I will get not a 3/100,000-of-a-human-brain computer, but a 4/100,000-of-a-human-brain one. Exponential growth means the year after that I will get not a 5/100,000-of-a-human-brain computer, but an 8/100,000-of-a-human-brain one. With information technology, I can expect to reach mindclone computing as rapidly as this:
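The contrast between the two projections can be checked with a few lines of arithmetic, using the chapter’s own round numbers (a computer at 2/100,000 of a brain’s capability, improving either by a fixed increment or by yearly doubling). This is an illustrative sketch, not a forecast:

```python
import math

BRAIN = 100_000   # human-brain capability, in the chapter's units
START = 2         # next year's computer: 2/100,000 of a brain

# Linear intuition: capability creeps up by 1/100,000 each year.
linear_years = BRAIN - START                              # 99,998 years

# Exponential reality: capability doubles every year.
exponential_years = math.ceil(math.log2(BRAIN / START))   # 16 years

# A two-year doubling period merely doubles the wait.
two_year_doubling = 2 * exponential_years                 # 32 years
```

The linear intuition yields nearly a hundred thousand years of waiting; the exponential reality yields about sixteen, or about thirty-two with a two-year doubling period.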

Four clarifying points are in order. First, it is unlikely that matching the number of human-brain neurons with the number of processors will be necessary to replicate the mind. We have all heard the estimate that people use only about 10 percent of their brain’s potential. If we grow up speaking five languages, we always will. If not, we won’t. People who have even half their brain removed surgically function relatively normally. So, we may already be at 10 percent, 20 percent, or more of the information technology we need to create a mindclone. The estimates above are as arguably conservative as they are arguably optimistic.

Second, the numbers and dates in the above example are approximations. Rounding down from 1,024 to 1,000 in the ninth year, for example, is just to make the arithmetic easier to follow. Similarly, while Moore’s law says that the doubling occurs every one to two years, for clarity, say it doubles every year. The effect of making it every two years would simply be to postpone mindclones to thirty-two years from now instead of sixteen; a doubling period of eighteen months gives twenty-four years. The important point is that mindclones are around the decade corner—not in some other millennium, century, or even generation. This is about our lives.

Third, some people question how long Moore’s law can continue to hold true, noting that other exponential phenomena—such as the growth of bacteria in a Petri dish—end when the room for growth runs out. In fact, because knowledge (unlike bacteria) can grow without limit, the doubling of information technology is not limited. Knowledge is the only resource that the more you exploit, the more you have to exploit. Engineers have already designed the pathways for the growth described by Moore’s law to continue for many decades. For example, when technology limits are reached with flat integrated circuits, computers will shift to three-dimensional integrated circuits. The technology already exists. Three-dimensional circuits stack separate chips in a single package, known as a system in package (SiP) or a stacked multi-chip module (MCM). Intel introduced such a 3D version of its Pentium 4 CPU in 2004, and in 2007 the company introduced an experimental version of an eighty-core design with stacked, or 3D, memory.

Fourth and finally, we need to discuss the difference between the computational capability of the human brain and the software capability of the human mind. Having the processing speed of our brains is not equivalent to having the wiring necessary to re-create our minds. Our minds are dependent on something very close to ten million billion (10¹⁶) neural connections, which we know because animals with fewer do not have our kind of minds. But our minds are just as dependent on these neural connections having a predilection or “connectome” (i.e., software like mindware) to be cross-associated in ways that give rise to characteristically human thoughts, emotions, and reactions.

Clever people throughout the world have repeatedly succeeded in creating software that gets the most out of hardware; great software transcends the performance expectations of even the best hardware, much as human ingenuity transcended the natural utility of sticks and stones. We landed men on the Moon with ten thousand lines of software code. Today’s laptop operating systems let us simultaneously watch video, listen to music, browse the web, email with friends, instant-message, write documents, and manipulate spreadsheets with around a hundred million lines of code. With thousands of experts worldwide working today at reverse-engineering the human mind, I feel confident these efforts will produce mindware when supportable with the necessary capability in hardware processors.

I would be skeptical if I thought the human mind were an impossible or hopelessly difficult machine to replicate. But it is not. It is breathtakingly proficient at associating things and parts of things with other things, including feelings; at building up real-time models of the outside world; and at self-organizing a continual, reasonably interacting sense of self. Yet these are tractable problems. As the inventor of cybernetics (from which came the “cyber” in “cyberspace”), mathematician Norbert Wiener, said, “if we can do anything in a clear and intelligible way, we can do it by machine.”47 The temptation to create mindware will beckon the greatest neuroscientists and software engineers on the planet. Working together, going back and forth between human models and portions of draft mindware code, we can expect mindclone software (human thought) to arise when the necessary hardware (processing speed and memory) is available.

It may be later than the 2020s (if, for example, we insist that every nuance of human emotion be present before we deem a mindclone humanly conscious) or it may be sooner than the 2030s (if, for example, we can use lesser computational speed more efficiently than the brain does to create human personality). The only case for mindware not occurring at all is if we believe human thought to be in the realm of spirit, something beyond artificial replication. Otherwise, if you replicate the information, such as a molecular structure or a pattern of neural connections, then you replicate the thing. In law there is a doctrine called res ipsa loquitur—the thing speaks for itself. This means, for example, that if the gun is smoking, then a bullet was shot. Perhaps in life there is a law called indicium ipsum loquitur—the information speaks for itself. If a replicated mind sacrifices for another’s happiness, then one kind of love is shown. Like someone getting up in the middle of the night because they’re cold—and putting a blanket over someone else in the house—the loving mindclone will recognize a truth and act on it for the benefit of another being.

So we delude ourselves that cyberconsciousness and mindclones are in the distant future because our linear minds have great difficulty projecting exponential phenomena. In fact, mindclones are probably as close to us in time as the birth of punk rock and Apple Computer. The very same revolution that

• brought cell phones from almost no one’s hands to almost everyone’s hands in under twenty years, and

• brought the internet from a military toy to a universal joy in under fifteen years,

will likely bring mindclones from chatbot infancy to human simulacra in the time it will take to get today’s toddlers into college or today’s Millennials into a career.

*   *   *

AS OF 2014, the most popular graduate-level course at Stanford relates to neuromorphic programming—which is about using software to seek out information from the environment (e.g., mindfiles and Big Data), and to learn from such information how to best achieve a set of goals (e.g., mindware and what people naturally do). “That reflects the zeitgeist,” said Terrence “Terry” Sejnowski, a computational neuroscientist at the Salk Institute, who pioneered biologically inspired algorithms. “Everyone knows there is something big happening and they’re trying to find out what it is.”48

I realize that until people present their mindclones as doppelgängers, and persuade others that their doubles dream and pray like humans, the consciousness/cyberconsciousness debate will rage on. The moment is close. Compared with the glacial pace of the biological or “natural” form of Darwinian evolution that took over three billion years to achieve, cyberlife or cyberconsciousness will arise in a heartbeat, because, as I’ve shown, the key elements of consciousness—such as autonomy and empathy—are amenable to software coding. And the coding itself, as I’ve also discussed, is happening at a rapid pace.

Thousands of software engineers are working to advance cyberconsciousness. The U.S. government is following up on the Human Genome Project with a Brain Activity Map, a decade-long effort meant to chart the activity of the brain’s billions of neurons in hope of gaining greater insights into perception, actions, and, ultimately, consciousness. There are stirrings everywhere, every day in the news of the progress that artificial intelligence and software consciousness are making—if you look. In 2012, Google researchers enabled a machine-learning algorithm, called a neural network, to perform an identification task without human supervision or guidance: it trained itself to recognize cats by scanning a database of ten million images. Seems rudimentary, but this is the sort of thing that is turning computer science on its head—and it does have practical use. A year later, Google used the same neural-network techniques to create a search service that helps people find specific photographs among millions.49
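Google’s experiment ran a large neural network over ten million images; as a minimal, purely illustrative stand-in for the underlying idea—software finding structure in unlabeled data without human guidance—here is k-means clustering on synthetic two-dimensional points (nothing like the actual system):

```python
import random

def dist2(p, q):
    """Squared Euclidean distance between two 2-D points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans2(points, iters=20):
    """Two-cluster k-means with deterministic initialization."""
    centroids = [points[0], points[-1]]
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            # Assign each point to its nearest centroid -- no labels involved.
            nearer = 0 if dist2(p, centroids[0]) <= dist2(p, centroids[1]) else 1
            clusters[nearer].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids

# Two unlabeled "blobs" of synthetic points.
rng = random.Random(1)
blob_a = [(rng.gauss(0, 0.5), rng.gauss(0, 0.5)) for _ in range(50)]
blob_b = [(rng.gauss(5, 0.5), rng.gauss(5, 0.5)) for _ in range(50)]
centroids = kmeans2(blob_a + blob_b)
```

Given two unlabeled clumps of points, the algorithm discovers both groupings on its own—no one ever tells it what a “clump” is, just as no one told Google’s network what a cat is.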

As I discuss in the next chapter, it is merely a matter of decades before symbol-association software achieves the complexity of human thought and emotion (i.e., mindware) and converges with the information billions of people are already engaged in compiling (i.e., mindfiles) to form software brains (i.e., mindclones). This is real intelligent design.

Copyright © 2014 by Martine Rothblatt


Excerpted from Virtually Human by Martine Rothblatt, Ray Kurzweil, Ralph Steadman. Copyright © 2015 Martine Rothblatt. Excerpted by permission of Picador.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
