Friday 2 November 2012

In Defence of Strong AI: The Chinese Room Refuted



It has become obvious to me that in order to fully explain my world view, I must first defend strong AI.

Strong AI is the position that it is possible in principle to have an artificial intelligence, implemented as a computing machine much like the machines we have today, which would be capable of consciousness, feeling and sensation. This is in contrast to the position of weak AI, which only claims that we could in principle make a computing machine which would merely behave as if it were conscious.

This is important not just for the ethics of how we might treat sentient computers but because it cuts to the heart of what our minds actually are. If our minds depend on something other than simple computation, this puts limits on certain questions we might ask about our universe. For example, if computation cannot produce consciousness, then we immediately know that we cannot all be living in a computer simulation.

Firstly, I would like to point out the problems I see with some of the most popular criticisms of strong AI, starting with perhaps the most famous: John Searle's "The Chinese Room". Later I will attempt to build a positive case of my own.

To summarise The Chinese Room briefly, Searle imagines a philosopher locked in a room with a rulebook which tells him how to process incoming Chinese symbols and ultimately output Chinese symbols in response. While this rulebook and the equipment in the room allow him to successfully carry out a conversation in Chinese as though he understands what is discussed, in fact he understands nothing. In his view, this shows that a computer is simply a symbol manipulation machine and can never truly understand anything.

In my opinion, The Chinese Room is a smart and insightful thought experiment, and its argument against strong AI is very persuasive, but ultimately it doesn't work. Essentially, The Chinese Room is an appeal to intuition, and unfortunately, intuitions can be misleading.

Searle himself outlines a number of objections to The Chinese Room about thirty minutes into this talk.


One objection he mentions is that The Chinese Room could insist that it understands Chinese just as forcefully and convincingly as a real Chinese speaker, so perhaps this means it actually does understand Chinese.

I agree with Searle that this is a weak argument. Of course the mere assertion that one understands Chinese (even in Chinese) is no cause to believe that Chinese is actually understood. It really isn't hard to write a computer program that can print out a statement claiming to understand anything you wish. So naive is the argument, as presented by Searle, that I suspect he is mischaracterising it. There may be a more sophisticated version of this, but let's let that slide for now.

Some, such as Hans Moravec, have suggested that if the computer were given the body of a robot, then meaning would come from that robot's interactions with the world. Searle quite rightly refutes this by arguing that if the philosopher in The Chinese Room were instead given inputs representing the robot's senses, and his outputs were interpreted as instructions for the robot's various motors, he would still be no closer to understanding the meaning of all the symbols he manipulates.

However, there remains a compelling argument which I believe Searle has not defeated. This argument is that it is not the philosopher who understands Chinese, but the system as a whole. The philosopher is simply playing the role of the CPU, and nobody claims that a CPU understands anything. Rather, it is the combination of the physical hardware (including the philosopher, the room and the rulebook) and the software they are running (the rules in the rulebook) that actually understands Chinese.
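To make this division of labour concrete, here is a toy sketch in Python. It is entirely my own invention and nothing like the richness of Searle's imagined rulebook: the lookup function plays the philosopher/CPU, and a hypothetical two-entry rulebook plays his instructions. The executor applies the rules blindly; whatever "understanding" there is resides in the system of rules as a whole, not in the executor.

    # Toy sketch: the "philosopher" blindly applies a rulebook he does
    # not understand. The two entries below are made-up placeholders.
    RULEBOOK = {
        "你好吗?": "我很好,谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "这是什么?": "这是一个房间。",  # "What is this?" -> "This is a room."
    }

    def philosopher(symbols):
        # The "CPU" merely looks symbols up and passes them back out.
        return RULEBOOK.get(symbols, "请再说一遍。")  # "Please repeat that."

    print(philosopher("你好吗?"))  # prints: 我很好,谢谢。

A real rulebook capable of passing for a Chinese speaker would of course be unimaginably more complex than a lookup table, but the division of labour is the same: the executor and the rules it executes are different things.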

Searle dismisses this argument rather cavalierly by treating it as ridiculous. From the talk above:

When I first heard this, I was flabbergasted, and I said to the guy (this was in a public debate), and I said "You mean the room understands Chinese?" 
Well, you gotta admire courage! He said "Yes, the room understands Chinese!" 
...
You gotta admire courage! I would never have the nerve to say that in public, but it's not going to work if you think about it.
Why not? Well, what I immediately said was "Ok, get rid of the room!"
Imagine I work outdoors, I memorize the rulebook, I do all the calculations in my head, and I'm in the middle of an open field, there is no room there, there's just me getting questions and giving back answers. It seems to me in that case it's exactly the same as the original case -- nothing is added by saying it's the room that understands.

I don't think this does justice to the "system" argument at all. Of course it's not the room that understands Chinese -- the room is not the system. Get rid of the room, get rid of the paper and do all the calculations in your head -- you still haven't got rid of the system, you've just changed its physical form somewhat.

I believe Searle would argue that in this scenario the system is all in his head. He is the only component of the system, and so if he does not understand Chinese then what remains to do the understanding?

The answer to this problem is probably going to be hard to swallow, but let me take a moment to remind you just how unrealistic this scenario is. Searle is proposing that a philosopher might in principle take on board and memorise a system of rules so complex and comprehensive that it allows the perfect simulation of a complete human mind, specifically that of a Chinese speaker. This is obviously absurd, as no human brain could ever be capable of perfectly simulating another without the use of external equipment of some kind.

Of course we can't say he's wrong just because this is an unrealistic thought experiment. It is indeed possible in principle for a godlike being of immense mental capacity (let's call him Super-Searle) to perform the feat of simulating the mind of a human in this fashion. This absurdity may not in itself defeat Searle's argument, but it does allow a correspondingly absurd counter-argument. Just as with the impossible lottery, if we ponder absurd problems then even correct conclusions can seem absurd.

In my view, what's actually happening as Super-Searle performs the calculations of The Chinese Room is that his mind has become the substrate upon which a new and distinct mind is running. Just as Super-Searle's brain is the hardware upon which the software of his mind is running, so his mind is the virtual hardware upon which the software of the Chinese mind is running. Super-Searle might not understand Chinese, but the system of rules which he is executing has given rise to a mind which does.

This should not be so strange a concept to people who are familiar with virtual machines in computing. It is possible for a piece of software on one machine to emulate the hardware of another, and so ultimately to run software intended for that alien hardware. For example, I could install an emulator of the Super Nintendo Entertainment System on my laptop, and then use it to play a game such as Street Fighter II, which was never intended to run on a modern computer. As with any other program, my laptop provides the hardware upon which the software of the SNES emulator runs. However, that emulator in turn provides the virtual hardware upon which the software of the game runs. My laptop doesn't "understand" the game, it only "understands" the emulator. It is the emulator that "understands" the game.
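To illustrate the layering (this has nothing to do with real SNES internals -- it's just a made-up three-instruction machine, sketched in Python):

    # Toy illustration of layering: the host runs a tiny virtual
    # machine, which in turn runs a guest program written for it.
    def run_vm(program):
        # The "emulator": a minimal stack machine.
        stack = []
        for op, arg in program:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                stack.append(stack.pop() + stack.pop())
            elif op == "PRINT":
                print(stack.pop())

    # The "game": a guest program the host never executes directly.
    guest_program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)]
    run_vm(guest_program)  # prints 5

The host only ever executes run_vm; the guest program is mere data to it. Yet the guest program runs all the same, one level up.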

If it is possible for a modern, fast computer to act as host and substrate for an older, obsolete computer, then I don't see why it might not be possible for a god-like mind to host a human-like mind within it. Indeed, I believe there are religions which posit that we are all elements in the dream of a sleeping God.

Arguments similar to mine have been posed numerous times by people such as Marvin Minsky. I must confess I find it quite vexing that Searle dismisses the "system" argument so easily and condescendingly in his lecture without discussing the "virtual mind" idea at all.

The closest he comes to addressing the issue is to assert that a simulation of a thing does not take on all the attributes of the thing itself -- that simulation is not duplication.
"No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched."
This argument assumes that consciousness is not an emergent property of computation. I would argue that simulation actually is duplication if what you are simulating is a mathematical process: the simulation of a given computation is equivalent to that computation. Whether my computer performs a computation directly or simulates a SNES which performs that computation, the same computation has taken place. Now, if consciousness is in fact an emergent property of the computational process carried out by the brain -- the very position Searle is attempting to prove untrue -- then simulation of that process would give rise to the same consciousness. Searle therefore cannot prove strong AI false without relying circularly on his conclusion that it is false.
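The point can be made painfully literal (a trivial check of my own devising, not anything from Searle or his critics): compute a sum directly, then compute it again on the toy simulated machine from above, and the very same computation has been performed.

    # Whether the addition is performed directly or by a simulated
    # machine, the same computation takes place.
    def add_direct(a, b):
        return a + b

    def add_simulated(a, b):
        stack = []
        for op, arg in [("PUSH", a), ("PUSH", b), ("ADD", None)]:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                stack.append(stack.pop() + stack.pop())
        return stack.pop()

    assert add_direct(2, 3) == add_simulated(2, 3)  # identical results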

None of this demonstrates that strong AI is possible or that my interpretation is correct, but it does show that The Chinese Room is potentially a deeply flawed analogy and that Searle has proven nothing definitively with this popular and stimulating thought experiment. His confident, laughing dismissal of his opponents' arguments is simply not warranted.

5 comments:

  1. Disagreeable, to keep the dialogue alive, here it is, my comment (http://txtpub.blogspot.com.br/2013/08/chinese-rooms-ultimate-output.html). Not sure it will match your expectations, though I'm sure it's also a valid way to keep talking. Cheers.

  2. Did anyone actually make the claim that being a good chatbot is sufficient for understanding? Because it seems to me like Searle is debunking a straw-man.

    Being a good chatbot might be a result of understanding, but it is no guarantee of it, and I don't see anyone making that claim as if it were something worth attacking with a grand "thought experiment".

  3. >Did anyone actually make the claim that being a good chatbot is sufficient for understanding? <

    No, but I would say having understanding is necessary to be a good chatbot.

    Keep in mind that we're not talking about a superficial trick like we see in Eliza. The chatbot has to pass the Turing test, standing up to rigorous scrutiny.

    If the chatbot is sophisticated enough to show the same level of insight, creativity and humour as a human conversationalist, I think it's a good bet that the chatbot is intelligent and has understanding.

    But even if it is possible to achieve this without understanding, there are those who think that machine understanding is possible (and I am one of them). Searle is attacking this position - the notion of any computer program understanding anything at all. You can take it that the Chinese Room is supposed to be such an intelligent understanding program.

    Searle then thinks he has shown that such a thing cannot be, that it is a nonsensical proposition. He is wrong.

  4. Yes, I agree that Searle's argument is harmless. I just don't know what the strong A.I. theorist has to worry about. If they would say that the mind arises from running a sufficiently complex program on a system, then really they should say the same about the case of the Chinese Room, which should also be sufficiently complex. This is no different from the human mind, which is just the program being run by the brain. With regards to the man in the room, there is no reason why he should have any part in the understanding, since he doesn't even answer any of the questions at all. He just runs the program; everything else is done by the program.

    Searle's argument is only a problem if you think that CPUs think, but I don't know anybody who would claim that they do.

  5. I wonder if the true purpose of Science is to remind humans continually to get over themselves. No, the sun does not orbit around you. No, you're not the only critter that uses tools. And guess what -- the activity of your big brain isn't something that's qualitatively beyond that of the ant that's running for its life.

    We're just more complex.

    I wonder why people like Searle don't feel terribly old-fashioned.

    Intuition really does mislead.
