The Chinese Room Argument

Reddit Comments

I've heard this described before, and I don't think it refutes 'strong AI', as he puts it, at all. Here's why:

Searle describes himself as analogous to the CPU - which he is in this thought experiment. And he says he doesn't understand Chinese, which he doesn't. But nobody is claiming that the CPU running an AI understands what it is doing, any more than anyone claims the molecules within our synapses know what they're doing.

To put it another way: Searle puts himself in the box and contrasts his understanding of English with his ignorance of Chinese, and on that basis says there is no understanding going on in the box. But that's an insupportable leap. He isn't doing any understanding, but the combination of him, the rulebook, and the symbols is doing the understanding. He has made himself into just one cog in a bigger machine, and the fact that a single cog can't encapsulate the entire function of the machine is irrelevant.
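A minimal sketch of that "one cog" reading, with the rulebook contents invented purely for illustration: the interpreter function below plays Searle's role, matching shapes to rules without ever touching meaning, so whatever understanding there is belongs to the whole system rather than to the loop.

```python
def interpreter(rulebook, batch):
    """Mechanically apply the rulebook to a batch of symbols.

    This function is the 'cog' (Searle, the CPU): it only ever asks
    'which rule matches this shape?', never 'what does this mean?'.
    """
    return "".join(rulebook.get(symbol, symbol) for symbol in batch)

# Invented stand-ins for the English instructions in the room.
rulebook = {"甲": "乙", "丙": "丁"}
print(interpreter(rulebook, "甲丙"))  # -> 乙丁
```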

👍︎ 531 👤︎ u/whentheworldquiets 📅︎ May 13 2020 🗫︎ replies

That was a great explanation, but it's a fairly outdated analogy. It works for good old-fashioned AI, not so much for newer systems built on machine learning.

We have no idea why AlphaGo made the moves it did when it beat the best Go player in the world. One move in particular completely dumbfounded the commentators and lots of people thought the AI must have made a mistake. It turned out to be a crucial move that enabled it to win the game.

No one showed AlphaGo how to play like that because nobody knew about it. Go players are now learning from it rather than the other way around. If the person in the Chinese room is teaching Chinese speakers how to speak Chinese, does the analogy still hold up?
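A toy sketch of that contrast, with the game and the training data invented for illustration: the learned "rulebook" below is just a vector of weights produced by optimization, so there is no line anywhere that a person wrote to explain any particular move.

```python
import random

def policy(weights, position):
    """Pick the move with the highest learned (and opaque) score."""
    return max(range(len(position)), key=lambda m: weights[m] * position[m])

def train(positions, best_moves, steps=200, lr=0.1):
    """Nudge weights toward demonstrated moves; no human writes the rules."""
    weights = [random.random() for _ in range(len(positions[0]))]
    for _ in range(steps):
        for pos, best in zip(positions, best_moves):
            chosen = policy(weights, pos)
            if chosen != best:
                weights[best] += lr * pos[best]      # reinforce the good move
                weights[chosen] -= lr * pos[chosen]  # suppress the bad one
    return weights  # the 'rulebook': a list of numbers, not legible rules

weights = train([[1, 1, 0], [0, 1, 1]], best_moves=[0, 2])
print(policy(weights, [1, 1, 0]))  # -> 0, though no rule says why
```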

👍︎ 339 👤︎ u/SeudonymousKhan 📅︎ May 13 2020 🗫︎ replies

Make a better test than Turing's.

👍︎ 45 👤︎ u/thesnuggler83 📅︎ May 13 2020 🗫︎ replies

I always think of this comic when talking about The Chinese Room.

To me, it's pretty easy to keep kicking that can down the road.

👍︎ 25 👤︎ u/rmeddy 📅︎ May 13 2020 🗫︎ replies

This guy is using the logical fallacies of someone who's religious.

👍︎ 9 👤︎ u/Shitymcshitpost 📅︎ May 13 2020 🗫︎ replies

Can someone help me with this? This does seem like an effective argument against the sufficiency of the Turing test, but not against strong AI itself. By which I mean: we do not have a sufficient understanding of consciousness to be certain it is not just as he describes (receive stimulus, compare to rules, output response), only with much, much more complicated rulesets to compare against.

So yes, the Chinese room refutes the idea that a Turing-complete computer understands Chinese (or whatever the input is), but it fails to demonstrate that we, as observers outside the room, can be certain the box in question is not conscious. I have a feeling I'm just taking this thought experiment outside its usefulness. Can anyone point me in the direction of the next step?

👍︎ 34 👤︎ u/HomicidalHotdog 📅︎ May 13 2020 🗫︎ replies

Due to the limitations of human observation, is it not true that a sufficiently complex AI that is actually sentient and one that merely appears sentient are functionally indistinguishable to us? The limits of human experience already force this on us: it is exactly how we have to regard other human minds.

In an almost Truman Show-esque analogy: Imagine that everyone in your life, except yourself, is an actor with a script. This script tells them what to do, what to say, how to portray every detail of their interactions with you in an almost infinite number of situations. In effect, artificially reproducing the experience of your whole life down to the tiniest of details.

How could you distinguish those people from your own consciousness and determine that they are genuinely sentient as you are, rather than following a script? They are essentially all "Chinese Rooms" themselves. Descartes famously coined the maxim "I think, therefore I am" as a demonstration that only his own consciousness was provable. The same could be said here.

Break the neurology of the human mind down to a granular enough scale and you have basic inputs and outputs: processes simulatable on a sufficiently complex machine. Give someone the tools, the materials, enough time, and such a model of a person's brain, and they could recreate it exactly. How is that any different to an AI?
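A minimal sketch of that granular picture, using made-up weights and thresholds rather than any real neurophysiology: each "neuron" is nothing but a weighted sum and a threshold, and wiring them together is plain input/output.

```python
def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold, else 0."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two toy neurons wired in sequence: the first one's output feeds the second.
first = neuron([1, 0, 1], weights=[0.6, 0.2, 0.5], threshold=1.0)
second = neuron([first, 1], weights=[0.7, 0.4], threshold=1.0)
print(second)  # -> 1
```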

The "context" that Searle refers to is just as syntactical as the rest of the operations a machine might simulate. We cannot prove that our own meanings and experiences are not equally logical, let alone those of an AI. He may state that he has greater context and meaning attached to his logic than that of a machine, but it could just as easily be simulated within his own neurones - a "program" running on his own organic brain.

👍︎ 9 👤︎ u/sck8000 📅︎ May 13 2020 🗫︎ replies

"If you can't tell the difference, does it matter?"

👍︎ 6 👤︎ u/Ragnarotico 📅︎ May 13 2020 🗫︎ replies

The illusion of mind emerges from the operation of the rule book.

In the case of the human mind, the rule book has been created by a long process of evolution. Humans that had a defective rule book didn't reproduce that rule book further, and humans that had mutually compatible rule books that also promoted survival could propagate those rule books.
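A toy sketch of that selection story, where "promotes survival" is an arbitrary made-up scoring rule: defective rule books are culled each generation, and a working rule book emerges with no author.

```python
import random

SYMBOLS = "abcd"

def random_rulebook():
    """A rule book is just an arbitrary symbol-to-symbol mapping."""
    return {s: random.choice(SYMBOLS) for s in SYMBOLS}

def fitness(book):
    # Arbitrary stand-in for 'promotes survival': reward mapping each
    # symbol to its alphabetical successor.
    return sum(book[s] == t for s, t in zip("abc", "bcd"))

def evolve(generations=50, population=20):
    books = [random_rulebook() for _ in range(population)]
    for _ in range(generations):
        books.sort(key=fitness, reverse=True)
        survivors = books[: population // 2]  # defective rule books die out
        children = []
        for parent in survivors:
            child = dict(parent)              # copy, then mutate one rule
            child[random.choice(SYMBOLS)] = random.choice(SYMBOLS)
            children.append(child)
        books = survivors + children
    return books[0]

print(fitness(evolve()))  # tends toward 3: a working rule book, no designer
```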

The illusion of the Chinese Room emerges from philosophers overestimating their role in the operation of the Chinese Room.

👍︎ 5 👤︎ u/CommissarTopol 📅︎ May 13 2020 🗫︎ replies
Captions
TO UNDERSTAND THE CHINESE ROOM ARGUMENT, YOU HAVE TO UNDERSTAND THE VIEW IT'S OPPOSED TO. NOW, THE VIEW IT'S OPPOSED TO HAS TO DO WITH THE COMPUTATIONAL THEORY OF THE MIND. WE USE COMPUTERS TO SIMULATE THE HUMAN MIND, AND THERE ARE DIFFERENT THEORIES ABOUT THAT. ONE VIEW SAYS, WELL, YOU CAN SIMULATE THE MIND AS YOU CAN SIMULATE ANYTHING. I CALL THAT WEAK ARTIFICIAL INTELLIGENCE OR WEAK AI. BUT THERE'S A MUCH STRONGER VIEW THAT SAYS, NO, IF YOU'VE GOT A SIMULATION, ONE THAT CAN PASS THE TURING TEST-- THAT IS, BEHAVES IN A WAY THAT'S JUST LIKE A HUMAN BEING-- THEN IT DOESN'T JUST SIMULATE THE MIND. YOU HAVE LITERALLY CREATED A MIND. IF YOU DESIGN A PROGRAM THAT CAN SIMULATE A HUMAN MIND SO THAT OTHER HUMAN BEINGS COULDN'T TELL THE DIFFERENCE BETWEEN THE BEHAVIOR OF THE COMPUTER AND THE BEHAVIOR OF A HUMAN BEING, THEN YOU'VE LITERALLY CREATED A MIND WITH ARTIFICIAL INTELLIGENCE. I CALL THAT VIEW STRONG AI. AND THAT VIEW SAYS, THE MIND IS TO THE BRAIN AS THE PROGRAM IS TO THE HARDWARE. SO IF WE'VE GOT THE RIGHT PROGRAM, THEN IT DOESN'T MATTER WHAT THE HARDWARE IS, BECAUSE WE'VE CREATED A MIND. CREATING A PROGRAM JUST IS CREATING A MIND, BECAUSE THE RIGHT PROGRAM WITH THE RIGHT INPUTS AND OUTPUTS, AND THE IMPLEMENTATION OF THAT PROGRAM, IS LITERALLY THE OPERATION OF A MIND. OK. THAT'S STRONG AI. THE CHINESE ROOM REFUTES THAT CLAIM.

IT'S A VERY SIMPLE REFUTATION. WHENEVER ANYBODY GIVES YOU ANY THEORY OF THE MIND, ALWAYS ASK, HOW WOULD IT BE FOR ME? SO THE CHINESE ROOM ARGUMENT SAYS, IMAGINE THAT YOU CARRY OUT THE PROGRAM, WHICH IS SUPPOSED TO BE THE PROGRAM OF A MIND EXECUTING A CERTAIN KIND OF MENTAL CAPACITY, AND [INAUDIBLE] IT FOR SOMETHING THAT YOU DON'T UNDERSTAND. I DON'T UNDERSTAND CHINESE, SO I USED CHINESE. IF YOU UNDERSTAND CHINESE, PICK SOMETHING DIFFERENT. PICK ARABIC OR SWAHILI OR SOMETHING YOU DON'T UNDERSTAND.

NOW, HERE'S HOW I IMAGINE IT. I AM LOCKED IN A ROOM, AND IN THIS ROOM ARE A LOT OF BOXES OF CHINESE SYMBOLS. I ALSO HAVE A RULE BOOK WRITTEN IN ENGLISH THAT TELLS ME WHAT TO DO WITH THE CHINESE SYMBOLS. NOW, I GET IN LITTLE BATCHES OF CHINESE SYMBOLS, AND I FOLLOW THE RULE BOOK THAT TELLS ME WHAT I'M SUPPOSED TO DO WITH THOSE, AND THEN I GO THROUGH ALL THESE STEPS, AND I GIVE BACK LITTLE BATCHES OF CHINESE SYMBOLS. NOW, UNKNOWN TO ME, THE PEOPLE OUTSIDE THE ROOM CALL THE LITTLE BATCHES THEY GIVE ME QUESTIONS AND THE LITTLE BATCHES I GIVE THEM ANSWERS. THE RULE BOOK THEY CALL THE COMPUTER PROGRAM, AND THE BOXES OF SYMBOLS THAT I USE, THEY CALL THE COMPUTER DATABASE. ME, THEY CALL THE COMPUTER OR, IF YOU LIKE, THE CENTRAL PROCESSING UNIT. SO I'M THERE, AND I GET IN THE QUESTIONS. I FOLLOW THE RULES IN THE PROGRAM, AND I GIVE BACK ANSWERS. I GIVE BACK ANSWERS TO THE QUESTIONS IN CHINESE. BUT, NOW, I DON'T KNOW ANY OF THAT. I DON'T UNDERSTAND A WORD OF CHINESE. AND WE SUPPOSE THAT THEY GET SO GOOD AT WRITING THE PROGRAMS, AND I GET SO GOOD AT SHUFFLING THE SYMBOLS, THAT, AFTER A WHILE, I PASS THE TURING TEST. MY ANSWERS ARE INDISTINGUISHABLE FROM THOSE OF A NATIVE CHINESE SPEAKER.

OK. NOW, HERE'S THE POINT. NO MATTER HOW GOOD THE PROGRAM, NO MATTER HOW EFFECTIVE I AM IN CARRYING OUT THE PROGRAM, AND NO MATTER HOW WELL MY BEHAVIOR SIMULATES THAT OF A CHINESE SPEAKER, I DON'T UNDERSTAND A WORD OF CHINESE. AND IF I DON'T UNDERSTAND CHINESE ON THE BASIS OF IMPLEMENTING THE PROGRAM, NEITHER DOES ANY OTHER DIGITAL COMPUTER ON THAT BASIS, BECAUSE THAT'S ALL A COMPUTER HAS. A COMPUTER HAS A SET OF RULES FOR MANIPULATING SYMBOLS.
NOW, YOU CAN SEE THE POWER OF THIS IF YOU CONTRAST WHAT IT'S LIKE FOR ME TO ANSWER QUESTIONS IN ENGLISH AND WHAT IT'S LIKE FOR ME TO ANSWER QUESTIONS IN CHINESE. IF YOU IMAGINE THESE GUYS ALSO ASK ME QUESTIONS IN ENGLISH, MY ANSWERS WILL BE INDISTINGUISHABLE FROM THOSE OF A NATIVE ENGLISH SPEAKER, BECAUSE I AM A NATIVE ENGLISH SPEAKER. MY ANSWERS TO THE QUESTIONS IN CHINESE ARE INDISTINGUISHABLE FROM THOSE OF A NATIVE CHINESE SPEAKER BECAUSE I'VE BEEN PROGRAMMED. ON THE OUTSIDE, I LOOK EXACTLY THE SAME. I'M ANSWERING QUESTIONS IN ENGLISH, AND I'M ANSWERING QUESTIONS IN CHINESE. IT'S TOTALLY DIFFERENT ON THE INSIDE. ON THE INSIDE, I UNDERSTAND ENGLISH, NO PROBLEM. BUT ON THE INSIDE, I DON'T UNDERSTAND CHINESE AT ALL. IT'S JUST MEANINGLESS SYMBOLS.

SO WHAT'S THAT DIFFERENCE? IF IT LOOKS THE SAME ON THE OUTSIDE, WHAT'S THE DIFFERENCE ON THE INSIDE? AND THE ANSWER, AGAIN, I THINK, IS OBVIOUS. I KNOW WHAT THESE WORDS MEAN. IN ENGLISH, I HAVE MEANINGS ATTACHING TO THE WORDS. NOW, IN CHINESE, ALL I'VE GOT IS SYMBOLS, FORMAL, SYNTACTICAL OBJECTS. BUT, NOW, THAT'S ALL ANY COMPUTER HAS. SO THE DISTINCTION BETWEEN THE COMPUTER AND THE MIND IS THAT THE COMPUTER JUST MANIPULATES FORMAL SYMBOLS, SYNTAX, SYNTACTICAL OBJECTS, WHEREAS THE MIND HAS SOMETHING IN ADDITION TO SYMBOLS. IT'S GOT MEANINGS. THAT'S NOT A WEAKNESS OF THE COMPUTER. THAT'S WHY WE USE COMPUTERS. YOU CAN FORGET ABOUT THE MEANINGS AND JUST GO WITH THE SYNTAX, JUST GO WITH THE MEANINGLESS SYMBOLS. BUT IF YOU'RE TALKING ABOUT HUMAN MINDS, THE ESSENTIAL THING ABOUT THE MIND IS THAT IT DOESN'T JUST HAVE FORMAL SYMBOLS. IT'S GOT MENTAL CONTENTS.

SO, REALLY, THE CHINESE ROOM ARGUMENT IS A KIND OF THREE-STEP ARGUMENT IF YOU SET IT OUT AS A FORMAL ARGUMENT. THE FIRST STEP IS, PROGRAMS CONSIST ENTIRELY OF SYNTACTICAL ENTITIES. THE IMPLEMENTED PROGRAM IS A SET OF SYNTACTICAL PROCESSES. BUT MINDS HAVE GOT SOMETHING MORE. THEY'VE GOT SEMANTICS. BUT NOW-- AND THIS IS WHAT THE CHINESE ROOM SHOWS-- JUST HAVING THE PROGRAM BY ITSELF ISN'T SUFFICIENT FOR THE SEMANTICS, AND FROM THAT, IT FOLLOWS THAT PROGRAMS AREN'T MINDS. AND THAT'S ANOTHER WAY OF SAYING STRONG AI IS FALSE.
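For reference, a minimal sketch of the setup as Searle describes it, with invented question/answer pairs standing in for the rule book's contents: the dictionary is the program plus database, and the lookup is the person (the CPU).

```python
# Invented rule book entries; a real conversation would need vastly more.
RULEBOOK = {
    # "If you receive this batch of symbols, hand back that batch."
    "你会说中文吗?": "当然, 我说得很流利.",
    "今天天气怎么样?": "今天天气很好.",
}

def the_room(batch_in):
    """The person's whole job: match shapes, hand back shapes."""
    # No step here involves knowing what any symbol means.
    return RULEBOOK.get(batch_in, "请再说一遍?")  # fallback: "say that again?"

# From outside, this looks like answering questions in Chinese.
print(the_room("你会说中文吗?"))
```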
Info
Channel: INTELECOM
Views: 58,512
Rating: 4.8498826 out of 5
Keywords: Intelecom, Learning, John Searle, philosophy, AI, artificial intelligence, Turing test, computers, computer programs, Alan Turing, brain, mental, Chinese room argument, simulation
Id: 18SXA-G2peY
Length: 5min 42sec (342 seconds)
Published: Mon Jul 16 2018