Thursday, February 17, 2011

Searle's Chinese Room: Slow Motion Intelligence

Imagine if the only books ever written were children's books. People would think books in general were a joke. I think the situation with computers and algorithms today is similar: people don't understand the ridiculous potential power of an algorithm because they only have experience with the "children's algorithms" that are running on their PC today.

Take John Searle's famous Chinese room thought experiment, which goes like this:
A man who doesn't speak Chinese is alone in a room with a big book of rules. The rule book gives detailed procedures for how to write a response in Chinese to any question written in Chinese.

An interlocutor standing outside the room writes a question in Chinese on a strip of paper and slips it under the door. To the man inside, the paper is full of meaningless squiggles. But by painstakingly following the syntactic rules in the rule book, he is able to put together a string of Chinese characters that reply to the interlocutor in perfect Chinese.

Searle claims it's obvious that nothing in the room has a "real understanding" of Chinese, neither the man nor the book. Therefore Searle concludes that "real understanding" is not something a computer could ever have, since a computer is just a rule-following system like the man and the book in the Chinese room.

Searle's Chinese room is a great thought experiment, but it's ultimately a non-insight. I just don't buy that nothing in the room has a "real understanding" of Chinese. Want to know what really understands Chinese? It's quite simply the "rule book".

You have to realize that the "rule book" is not a children's rule book. You can be sure it has a lot more pages than any actual book could have. Maybe it doesn't have as many pages as a human has neurons (100 billion), but it could easily be a million-pager like the code for Microsoft Word.
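To make this concrete, here is a minimal sketch (in Python; the dictionary entries are invented examples, not anything from the post) of what a children's-algorithm version of the rule book looks like: a bare lookup table from questions to canned answers.

```python
# A toy "Chinese room" rule book: a bare lookup table from questions to
# canned answers. The entries are invented examples for illustration only.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天星期几？": "今天星期四。",  # "What day is it today?" -> "It's Thursday."
}

def answer(question: str) -> str:
    """Follow the rules purely syntactically: match squiggles, emit squiggles."""
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(answer("你好吗？"))  # prints the canned reply without "understanding" anything
```

A table like this handles a handful of fixed questions; the rule book Searle asks you to imagine has to handle any question, in context, in fluent Chinese, which is what pushes it from a few entries into million-page, Microsoft-Word-scale territory.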

And you can be sure the person in the room would be flipping among the pages of instructions a lot slower than the firing rate of neurons. Considering that brains have billions of neurons all firing up to 100 times each second, we're looking at a trillion-fold speed difference between these two language-processing systems.
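Here's a rough back-of-the-envelope version of that figure. The neuron count and firing rate are the numbers from the paragraphs above; the page-flipping rate is my own (generous) assumption.

```python
# Back-of-the-envelope check of the "trillion-fold" speed gap.
neurons = 100e9          # ~100 billion neurons (figure used above)
firing_rate_hz = 100     # up to ~100 firings per neuron per second (figure used above)
page_flips_per_sec = 10  # assumption: a very generous pace for the man in the room

brain_events_per_sec = neurons * firing_rate_hz      # ~1e13 events per second
speedup = brain_events_per_sec / page_flips_per_sec  # ~1e12, i.e. trillion-fold
print(f"{speedup:.0e}")  # -> 1e+12
```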

If you watched an actual Chinese speaker with their brain slowed by a factor of a trillion, you'd see slow and soulless neuron-level computation. When you compare that to watching a man flipping around in a rule book, the Chinese room doesn't necessarily seem like the more mechanical system.

Both systems get their soul back when you zoom out. Imagine zooming out on the Chinese room enough that you can watch a million book-flipping years pass while the interlocutor is waiting for a yes-or-no answer. If you watch that process on fast-forward, you'll see a chamber full of incredibly complex and inscrutable machinery, which is exactly what a Chinese speaker's head is.

The Chinese room is supposed to persuade you that a system made out of mere pages can't "really understand" language. But it doesn't address why a system made out of mere neurons shouldn't have the same limitation. To me it seems clear that the two systems have similar architectures and possess similar powers of understanding.

7 comments:

  1. While I completely agree that Searle's Chinese Room argument fails in depicting what knowing Chinese means, I think you're ignoring the real issue here.
    As I see it, the question's not about understanding or speed, it's about consciousness. A Chinese speaker has a conscious experience of translating that the Chinese room as a whole (or any equivalent program) does not have.
    As any human being knows, we inherently view the world as a subjective conscious experience. When we understand a phrase in a foreign language, we know we do, we feel we do.
    The experiment you propose - to zoom out in order to see that the Chinese room actually does understand Chinese - refutes Searle's argument. But it doesn't solve the "hard problem" of consciousness.

    No matter how far machine translation goes in the future, until we figure out how our brains create consciousness we won't be able to build a Chinese room that knows it understands it.
    Now that's one tough nut to crack ;)

    Replies
    1. Bullshit. Consciousness is an illusion.

      All these machine learning algorithms are lacking is experience, memory, and balanced reward-seeking behavior: experimentation to learn from en masse, so they can strategize and maximize their individual growth through reflection and collective collaboration.

      That nut will be cracked sooner than you think. ;)

  2. Hamutal,

    If the Chinese room is able to have convincing conversations then it's quite possible that the system is conscious. It's also possible that the system is unconscious.

    As you say, the problem of consciousness is separate from the question of whether "machines" can have "real understanding".

    Whatever this "consciousness" stuff is, it's almost certainly something that can emerge from the operation of any Turing-complete substrate, and not just neurons.

  3. Exactly.
    And may I add that if consciousness can indeed emerge from a machine, I sure as hell hope we're around to see it happen!

    Hamutal

  4. Interestingly, the logical conclusion of this argument is a proof of intelligent design.

  5. @hamultan - GPT4 is proof that consciousness is just a zoomed-out version of billions of small interactions.

  6. Agreed! https://marginalrevolution.com/marginalrevolution/2022/04/the-chinese-room-thinks.html
