Monday 28 May 2012

Essential identity: Cogito ergo sum



If there's one part of us that we consider to be the seat of our identities, it is our brains.

Suppose one day we have the technology to perform brain transplants safely. We might find patients who, for one reason or another, wish to undergo the procedure. For example, two transgender individuals of opposite physical sexes might wish to exchange bodies so that they can truly feel that their physical bodies match their mental genders.

If we ever managed to perform such a procedure, which legal identity would each person inherit? I presume that the identities would follow the brains rather than the bodies. This is because each of us recognises that it is the brain and the mind it hosts that define who we are.

Brains are necessarily unique: each encodes its own unrepeatable set of experiences. Sure, the atoms that make up the neurons might be cycled in and out, but the neurons themselves remain. As such, the brain seems to be a good candidate for the seat of identity.

But each brain, no matter how complex in aggregate, is made of relatively simple parts which can be understood in isolation and reproduced. We can imagine a future when parts of the brain might be replaced by artificial analogues which function in precisely the same way. This is not as far-fetched as it might seem.

Consider a hypothetical whole-brain prosthesis. Suppose we digitally image the brain of a person with a degenerative neuronal disease and have the technology to produce a precise electronic equivalent. We then need to consider whether it is ethical to perform the surgery needed to replace the defective organic brain with the synthetic one.

Most of us would have no major existential qualms about heart transplants; our only concern is whether such a procedure will have a positive effect on our health. Brain transplants are far more troubling, and this all comes down to the association of identity with the brain.

If we performed such a brain transplant, our intuitive notions of identity would suggest we have in fact killed the patient and replaced him or her with a simulated electronic facsimile. Depending on where you stand on the Strong AI debate, we may have created a zombie that merely appears to behave like the person but has no consciousness.

But what if the brain is replaced only gradually, bit by bit, as the disease slowly progresses? The patient undergoes a gradual transition from organic carbon-based hardware to artificial silicon-based hardware, all the while appearing outwardly to be the same person and apparently unaware of any change in identity or experience. Is there a point at which the person is half zombie and half conscious? What would that even mean?

This seems most unlikely to me. I don't think any change of identity has taken place. And if that is true when the replacement happens gradually, I don't see why it should be any different when the operation is done all at once.

It seems that the question cannot be resolved by any purely physical considerations, whether of the atoms composing the body or the hardware comprising the brain. Instead, what we really identify with is the mind. The mind is an emergent abstract object which is supported by the substrate of the physical brain but not existentially bound to it. Rather than being composed of atoms, it is formed of such stuff as memories, beliefs, mental abilities and techniques, personality traits, motives and so on. It doesn't fundamentally matter which hardware supports it.

However, if the mind is simply software which runs on the hardware of the brain, then this poses another problem. Software is simply a type of information, and information can be copied.

Suppose that the hardware needed to support the processing performed by our prosthetic brain runs too hot and needs too much energy to be housed inside the person's skull. Perhaps the processing happens remotely, and the prosthesis inside the skull exchanges data with the real "brain", which is hosted on a server farm elsewhere. It's like the difference between playing a game on a local machine and using a streaming service such as OnLive. "Stream of consciousness" takes on a whole new meaning.

If the brain processing is happening remotely on a cloud server farm, then it may not even be running on a dedicated piece of hardware. Cloud services are often implemented using virtual machines, in which what appear to be physical machines are actually emulated in software, independent of any particular piece of physical hardware. This has many advantages: a failure of any one physical machine poses no particular problem, and it's easy to scale up as demand increases by simply duplicating machines.

But if a person's consciousness is hosted on a virtual machine, then that consciousness becomes relatively trivial to duplicate. We might imagine two instances of the process running in parallel, receiving exactly the same sensory inputs and giving exactly the same controlling outputs. The outputs should be identical because the inputs are identical and each mind is identical. This might be a reasonable implementation of a fail-safe measure in case some physical hardware experiences a fault which could interrupt the consciousness of the patient, with potentially lethal consequences.
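As a rough sketch of what such lockstep redundancy might look like, consider the following. Everything here, including the mind function, is a made-up stand-in rather than a claim about how minds work; the point is only that a deterministic process fed identical inputs on two machines produces identical outputs, so the duplicates are indistinguishable from the outside.

```python
import hashlib

def mind(state, sensory_input):
    """A stand-in for the hypothetical deterministic mind process:
    the same state and input always yield the same output and next state."""
    digest = hashlib.sha256((state + sensory_input).encode()).hexdigest()
    output = digest[:8]      # the "controlling output" sent back to the body
    return output, digest    # the digest doubles as the updated mental state

# Two replicas boot from an identical snapshot of the mind.
state_a = state_b = "initial-brain-image"

for tick, stimulus in enumerate(["light", "sound", "touch"]):
    out_a, state_a = mind(state_a, stimulus)
    out_b, state_b = mind(state_b, stimulus)
    # While the inputs are identical the outputs agree, so the body
    # cannot tell (and need not care) which copy is "in control".
    assert out_a == out_b
    print(f"tick {tick}: both replicas output {out_a}")
```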

But if the same mental process is running on two distinct physical machines, have we not now created two minds where once there was one? Each mind would perceive itself as being uniquely in control of the body and would not be able to detect the other. This is because both minds behave identically, being precise duplicates of each other. They are not fighting for control of the body because they act in unison.

So what existential difference is there if there are two copies of the algorithm rather than just one? If I remove one of the virtual machines hosting the mind, have I killed somebody? If I add yet another copy, have I created life? What has really changed? Because both copies are identical, it could be argued that there is in fact only one mind, which merely happens to be running on two virtual machines.

Unfortunately it's not so easy. What if there's a hardware fault? Perhaps a data transmission problem interrupts the body's connection to mind A but not to mind B. This may be experienced as lag or a temporary blackout. Once the connection is restored, the two minds are no longer acting in unison. One has had a slightly different experience from the other, and, as chaos theory suggests, this difference, however small, is likely only to be amplified over time.
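A toy calculation makes the amplification concrete. The logistic map below is just a stand-in for any chaotic dynamics; the point is that a perturbation of one part in a trillion grows to order one within a few dozen steps.

```python
def logistic(x):
    # The logistic map with r = 4 is a textbook example of chaos.
    return 4.0 * x * (1.0 - x)

a = b = 0.37    # two identical "minds"
b += 1e-12      # one suffers an imperceptibly small glitch

for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.6f}")

# The gap roughly doubles each step; by around step 40 the two
# trajectories bear no resemblance to one another.
```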

So now the two minds might actually be in contention. Detecting this, the service provider might opt to "kill" one of the minds (perhaps the one that experienced the fault) and replace it with a fresh clone of the other. But now we have actually deleted a unique personality, so perhaps this constitutes murder where the removal of a truly identical copy would not.
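Expressed as a policy, the provider's reconciliation step might look something like the sketch below. This is purely illustrative: the Replica class and the reconcile function are inventions for the sake of the example, not any real cloud provider's API.

```python
from dataclasses import dataclass, replace

@dataclass
class Replica:
    state: str
    fault_detected: bool = False

def reconcile(a, b):
    """Hypothetical provider policy: if the replicas have diverged,
    drop whichever one was flagged as faulty and re-clone the other."""
    if a.state == b.state:
        return a, b                     # still in lockstep; nothing to do
    healthy = b if a.fault_detected else a
    # By this point the discarded replica's state was unique, which is
    # exactly where the worry about deleting a personality comes from.
    return healthy, replace(healthy)    # dataclasses.replace gives a fresh copy

# Replica `a` suffered the transmission fault and has diverged.
a = Replica(state="diverged-after-lag", fault_detected=True)
b = Replica(state="uninterrupted")
print(reconcile(a, b))                  # two copies of the healthy state
```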

This is unsatisfactory. It seems unreasonable to suppose that a mere data transmission fault can confer a status of unique personhood upon a mind.

As with most of the paradoxes of identity which I will discuss, my solution is simply that we should recognise that our concept of identity is nothing more than instinct or intuition, and that it does not reflect absolute truth. The questions we pose ourselves when we ponder such paradoxes are meaningless. There are no right answers.

And so brains and minds fail as the sources of identity. The only reason this isn't a problem for us is that we do not yet have the technology to clone brains or minds. We will need to grapple with these questions if that technology ever does arise.

If no physical property can determine identity, and if even emergent properties such as minds are insufficient, then perhaps there is some mystical essential attribute that people possess that confers identity upon them.

I will discuss this point of view in the next post.
