Imagine if the only books ever written were children's books. People would think books in general were a joke. I think the situation with computers and algorithms today is similar: people don't understand the
ridiculous potential power of an algorithm because they've only ever experienced the "children's algorithms" running on their PCs today.
Take John Searle's famous
Chinese room thought experiment, which goes like this:
A man who doesn't speak Chinese is alone in a room with a big book of rules. The rule book gives detailed procedures for how to write a response in Chinese to any question written in Chinese.
An interlocutor standing outside the room writes a question in Chinese on a strip of paper and slips it under the door. To the man inside, the paper is full of meaningless squiggles. But by painstakingly following the syntactic rules in the rule book, he is able to put together a string of Chinese characters that reply to the interlocutor in perfect Chinese.
Searle claims it's obvious that nothing in the room has a "real understanding" of Chinese: neither the man nor the book. From this he concludes that "real understanding" is not something a computer could ever have, since a computer is just a rule-following system, like the man and the book in the Chinese room.
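To see how little machinery a "rule-following system" needs, here's a toy sketch in Python (my illustration, not Searle's), with the rule book shrunk to a lookup table. A real book would need vastly more rules and internal state, but the architecture is the same: symbols in, symbols out, with no meaning consulted anywhere.

```python
# A toy Chinese room: a pure rule-following system.
# The "rule book" is just a lookup table here; the phrases and
# replies are placeholders of my own choosing.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "Fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(question: str) -> str:
    # The man in the room: match the squiggles against the book,
    # copy the prescribed squiggles back out. No understanding required.
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please repeat that."

print(chinese_room("你好吗？"))  # prints: 我很好，谢谢。
```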
Searle's Chinese room is a great thought experiment, but it's ultimately a non-insight. I just don't buy that nothing in the room has a "real understanding" of Chinese. Want to know what really understands Chinese? It's quite simply the "rule book".
You have to realize that the "rule book" is not a children's rule book. You can be sure it has far more pages than any physical book could hold. Maybe it doesn't have as many pages as a human brain has neurons (roughly 100 billion), but it could easily be a million-pager, like the source code for Microsoft Word.
And you can be sure the person in the room would be flipping among the pages of instructions far slower than the firing rate of neurons. Considering that a brain has on the order of 100 billion neurons, each firing up to 100 times per second, the speed difference between these two language-processing systems is at least a trillion-fold.
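Here's the back-of-the-envelope arithmetic. The brain-side figures are the ones above (about 100 billion neurons, up to 100 firings per second each); the page-flipping rate is my own generous assumption for illustration.

```python
# Rough step-rate comparison of the two language-processing systems.

neurons = 100e9            # ~100 billion neurons in a human brain
max_firing_rate = 100      # firings per neuron per second (upper end)

brain_events_per_sec = neurons * max_firing_rate  # 1e13 events/sec

pages_per_sec = 1          # assume the man consults one page per second

ratio = brain_events_per_sec / pages_per_sec
print(f"speed gap: {ratio:.0e}x")  # 1e+13x -- at least a trillion-fold
```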
If you watched an actual Chinese speaker with their brain slowed by a factor of a trillion, you'd see slow and soulless neuron-level computation. When you compare that to watching a man flipping around in a rule book, the Chinese room doesn't necessarily seem like the more mechanical system.
Both systems get their soul back when you zoom out. Imagine zooming out on the Chinese room enough that you can watch a million book-flipping years pass while the interlocutor is waiting for a yes-or-no answer. If you watch that process on fast-forward, you'll see a chamber full of incredibly complex and inscrutable machinery, which is exactly what a Chinese speaker's head is.
The Chinese room is supposed to persuade you that a system made out of mere pages can't "really understand" language. But it doesn't address why a system made out of mere neurons shouldn't have the same limitation. To me it seems clear that the two systems have similar architectures and possess similar powers of understanding.