A thought experiment by John Searle attacking the strong AI postulate. A person in a locked room carries on a dialogue with us in written Chinese, passed back and forth on paper under the door. The person responds by following instructions stored in a vast library of rule-books and does not understand Chinese.
Since the person doesn't understand the language, and the rule-books obviously do not understand it either, Searle claims that no real understanding of the language is involved. He likens dialogue with a computer to this situation, arguing that it makes clear why computers are not aware.
The scenario has been widely debated. Proponents of strong AI reply that the system comprising the room and the person could be said to understand Chinese, in much the same way that the neurons of a human brain, which individually know nothing of Chinese, together form a system that does know the language. See The Chinese Room Argument for more information.