Tuesday, January 25, 2011

Chinese Room

Reference
Searle, John R. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3 (1980).
Wikipedia: Chinese room


Summary
This article describes a thought experiment proposed by John Searle that addresses the question: if a machine can behave intelligently, does it really understand? Searle imagines a person who knows no Chinese locked in a room with a rulebook for manipulating Chinese symbols; when Chinese questions are passed in, the person follows the rules and passes back correct answers without ever understanding the language. The essence of the controversy is Searle's claim that you can create a program that behaves intelligently, but running that program cannot give a computer a mind of its own. The argument arose as a counterargument to functionalism and computationalism. Searle distinguishes two positions, which he calls strong AI and weak AI: strong AI holds that the simulation literally is a mind, while weak AI holds only that the simulation models a mind.
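
To make the symbol-manipulation point concrete, here is a minimal sketch of my own (not from Searle's article): a toy "rulebook" in Python that maps Chinese questions to canned Chinese answers. The phrase pairs are made up for illustration.

    # The rulebook: a pure symbol-to-symbol mapping with no semantics.
    # (Example phrase pairs invented for this sketch.)
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "你叫什么名字？": "我叫小明。",        # "What's your name?" -> "My name is Xiaoming."
    }

    def chinese_room(symbols: str) -> str:
        # Match the incoming symbols against the rulebook and return
        # the prescribed output. No step here involves understanding
        # Chinese; it is pure lookup, which is Searle's point.
        return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗？"))  # prints: 我很好，谢谢。

The program returns fluent-looking answers while nothing in it understands Chinese, which is exactly the gap the summary draws between simulating a mind and being one.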

Discussion
I definitely agree with refuting the idea that the computer literally understands Chinese (for purposes of this discussion). A computer can only act as intelligently as the program it is told to run. When a program produces the right output, it can make some people think it has some sort of intelligence, but that "intelligence" comes from a simulation that models a mind well enough to anticipate the solution to that particular problem. I also believe people should be extremely careful when using the words brain and mind interchangeably. There is a big difference between the mind and the brain, yet most of us cannot cleanly separate one from the other, and I include myself in that group. I apologize if this discussion was a bit confusing, but I found the article itself a bit confusing, not even going to lie there.
