Discuss the impact of the Chinese room argument on the possibility of human-level AI

Apr 10, 2024

Length – 1800 words (with bibliography)

Submit to – the dropbox (essay folder)


Document – Word document (not PDF). The file should be titled firstnamelastnamestudentnumber.doc

Citation style – any, so long as it is uniform, includes page numbers whenever relevant, and you reference everything that you quote or paraphrase (it's plagiarism if you don't).

Feedback – When your essay is graded, you will get feedback in the dropbox. You will see comments in the feedback box and added to your file (I will attach a version of it with comments added).

Grading – Based on knowledge of course content, structure (e.g. does the introduction do its job? Does the essay deliver what was promised in the introduction?), argument (e.g. is the thesis properly defended? Does the argument flow well? Are there unsupported claims?), and clarity (e.g. are there vague claims?).

Essay topic

Discuss the impact of the Chinese room argument on the possibility of human-level AI.

In other words: Is human-level AI possible? Discuss the Chinese room argument to support your thesis.

You should come up with your own thesis, separate from what Searle has already argued (in other words, don't just repeat his argument – ask yourself what you could say to convince someone who disagrees with Searle), and separate from what others have argued against Searle. You can use these arguments as tools to support your thesis.

When defending your thesis in the essay, consider the following points:

  1. Consider the Turing Test. Is it an accurate test of intelligence? Why or why not?
  2. Which objections raised against the Chinese Room argument are the most persuasive?
  3. Were Searle's responses to these objections satisfactory?
  4. If you think Searle is right, could he have come up with better responses? Explain how they could be improved.
  5. Consider examples from GPS and SHRDLU to illustrate your points.
  6. Does connectionism change anything?

In your essay, you should consider a counter-argument to your thesis, and defend your thesis against it.

NOTES:

Reading Notes and Video URL (information for writing this essay)

—–

The Chinese Room

Searle asks you to imagine a closed room with an opening that you can use to enter questions written in Chinese. If you wait a bit, through the opening you eventually get an answer to the question, also written in Chinese. With the room seen as a black box, it appears as if the room is able to carry on a perfectly intelligent conversation in Chinese. Searle then adds some more detail. Inside the room there is an operator who does not speak a word of Chinese. He has detailed rules for manipulating Chinese symbols that he knows how to follow. Despite the fact that the room behaves as if it understands Chinese, we know that there is no understanding, because we know that the man inside does not really understand Chinese. The metaphor is used to point out that a computer that is a mere symbol manipulator does not understand anything. Just like the man in the room, the computer is merely able to follow instructions on how to manipulate symbols.
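To make the point concrete in code, here is a minimal sketch of what the operator's rulebook amounts to. The Chinese sentences and rules are invented for illustration; the point is only that the program maps input symbols to output symbols by rule, with no understanding anywhere in the process:

# A toy "Chinese room": answers are produced by looking up rules that
# match input symbols to output symbols. No meaning is involved.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # rule: this squiggle maps to that squiggle
    "今天天气怎么样？": "今天天气很好。",  # another purely formal rule
}

def operator(question: str) -> str:
    """Follow the symbol-manipulation rules; nothing more."""
    return RULEBOOK.get(question, "对不起，请换一个问题。")  # default reply

print(operator("你好吗？"))  # from outside, this looks like understanding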

The SEP article on the Chinese Room Argument summarizes Searle's argument as a reductio ad absurdum meant to show that Strong AI (the claim that human-level AI is possible) is false:

  1. If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.
  2. I could run a program for Chinese without thereby coming to understand Chinese.
  3. Therefore Strong AI is false.
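Written out in standard propositional form, the argument is a simple modus tollens. The notation below is a sketch of my own, not the SEP's:

\begin{align*}
&\text{P1: } \textit{StrongAI} \rightarrow \forall x\,(\textit{RunsChineseProgram}(x) \rightarrow \textit{UnderstandsChinese}(x))\\
&\text{P2: } \textit{RunsChineseProgram}(\text{me}) \wedge \neg\,\textit{UnderstandsChinese}(\text{me})\\
&\text{C: } \therefore \neg\,\textit{StrongAI}
\end{align*}

P2 provides a counterexample to the consequent of P1, so P1's antecedent must be false.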

Critics of Searle’s conclusion came up
with a number of replies: the systems, robot, brain simulator, other minds, and
intuition replies. I will not describe them here, since the SEP entry on
Searle’s paper does a good job of summarizing them.

In the aftermath of the Chinese Room Argument, some concluded that Classical AI was a failed project because it was based on the PSSH, and that Connectionism (artificial neural networks) was the way to go. Others were more pessimistic, and concluded that Strong AI was simply impossible. One aspect that complicates things is that the alternative connectionist networks are generally simulated on Von Neumann machines – common computers (we will discuss this in a separate module).
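The complication is easy to see in code: a connectionist network is typically just ordinary arithmetic executed one step at a time on a conventional computer. Here is a minimal sketch (the weights are invented for illustration) of a single "connectionist" layer simulated on a Von Neumann machine:

import math

def sigmoid(x: float) -> float:
    # Squashing function applied by each unit.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer; weights[i] holds unit i's input weights."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Invented example: 2 inputs feeding 2 units.
print(layer([1.0, 0.0], weights=[[0.5, -0.3], [0.8, 0.1]], biases=[0.0, -0.2]))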

Searle in 2009

Three decades later, Searle seems to
have moved to a slightly different position. He seems to place less weight on
biology, and he is now (in the age of neural networks) more keen to specify
that symbol-processing machines (rather than machines in general) cannot think.

Excerpt from an interview on the Machines Like Us website:

MLU: Professor Searle, thank you for joining us. I'll get straight to the issue that Machines Like Us readers will be interested in: can a computer think?

JS: It all depends on what you mean by “computer” and by
“think.” I take it by “thinking” you mean conscious thought processes
of the sort which I am now undergoing while I answer this question, and by
“computer” you mean anything that computes. (I will later get to a more precise
characterization of “computes”). So construed all normal human beings are
thinking computers. Humans can, for example, do things like add one plus one to
get two and for that reason all such human beings are computers, and all normal
human beings can think, so there are a lot of computers that can think,
therefore any normal human being is a thinking computer.

People who generally ask this question,
can computers think?, really don’t mean it in that sense. One of the questions
they are trying to ask could be put this way: Could a man-made machine — in
the sense in which our ordinary commercial computers are man-made machines —
could such a man-made machine, having no biological components, think? And here
again I think the answer is there is no obstacle whatever in principle to
building a thinking machine, because human beings are thinking machines. If by
“machine” we mean any physical system capable of performing certain functions,
then all human beings are machines, and their brains are sub-machines within
the larger machines, and brains can certainly think. So some machines can
think, namely human and many animal brains, and for that reason the larger
machines — humans and many animals — can think.

But once again this is not the only
question that people are really asking. I think the question they are really
trying to ask is this: Is computation by itself sufficient for thinking? If you
had the machine that had the right inputs and outputs and had computational
processes between, would that be sufficient for thinking? And now we get to the
question: What is meant by “computational processes”? If we interpret this in
the sense that has been made clear by Alan Turing and his successors, where
computation is defined as formal operations performed over binary symbols,
(usually thought of as zeroes and ones but any symbols will do), then for
computation so defined, such processes would not by themselves be sufficient
for thinking. Just having syntactically characterized objects such as zeroes
and ones and a set of formal rules for manipulating them (the program) is not
by itself sufficient for thinking because thinking involves more than just
manipulating symbols, it involves semantic content.

Introduction

Building on the lessons Winograd learned with SHRDLU's virtual robot came actual robots, also based on the microworlds approach devised by Minsky and Papert (Winograd's supervisor) in 1970. Shakey was developed at Stanford between 1968 and 1972. Freddy was built in the Department of Machine Intelligence and Perception at the University of Edinburgh, in Scotland, between 1969 and 1971. They performed the tasks they were designed to perform, but critics pointed out that although these robots operated in very simple worlds and performed very limited tasks, they were incredibly slow. In the meantime, neural network research was considered to be dead, following the publication of the book Perceptrons, in 1969, which pointed out its many limitations, and computer translation had already been pronounced impossible in 1966. Dreyfus's book, What Computers Can't Do, was another nail in the coffin for AI. All this led to a general feeling that AI was going nowhere, and government funding was severely cut back for a number of years. This is now known as the first AI Winter. Watching the Lighthill debate will give you a good idea of the general attitude towards AI in the scientific community in 1973, and help you understand how funding almost completely dried up.

When Searle published his Chinese Room
argument in 1980 arguing that computers cannot think, many felt he had hammered
the final nail in the coffin for AI. This does not mean that research stopped
entirely, but in the aftermath of Searle’s paper many researchers no longer
believed that AI could give humanity thinking machines. The history of AI as a
broader project was far from over, however, and hype cycles continue to unfold
today.

Robots are slow

Shakey was a mobile robot that was able to move objects, like SHRDLU, only in real life. Its world was very simple, with markers designed to help the computer distinguish between the floor and the wall, and high-contrast colours. A video of Shakey showed the robot impressively manipulating its world:

Video of Shakey in action:

The video fails to mention that the tasks it shows took days for Shakey to complete!

Freddy was a stationary robot with the ability to recognize objects such as a cup, a ball and a hammer, although it took several minutes to identify each object.

Why were these robots so slow? Combinatorial Explosion is the general name given to the problem of having to explore too many possible solutions, but it is interesting to see one reason why robots in particular faced this difficulty. The Epistemological Frame Problem, outlined by Dennett in 1978, is an account of one central reason. The experience was that robots had trouble updating their beliefs about the world as they changed the world (it was difficult to know which beliefs had to be updated). Also, when humans make a decision about what to do, how do we do it? (The Stanford Encyclopedia of Philosophy entry calls this the computational aspect of the frame problem.) How do we know how to frame the problem – how do we know which beliefs are relevant and which are irrelevant to solving it? It is important to note that this problem seems to arise because of Classical AI's use of sentence-like representations of the world.
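A toy calculation makes the combinatorial explosion concrete. If a robot can choose among b actions at each step and plans d steps ahead, a naive planner faces b**d candidate action sequences (the values of b and d below are invented for illustration):

# Combinatorial explosion: candidate plans grow as b**d.
for b in (2, 10):            # branching factor: actions available per step
    for d in (5, 10, 20):    # planning depth: steps ahead
        print(f"b={b:2}, d={d:2}: {b**d:,} action sequences")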

A good way to gauge the reaction of the scientific community to the unimpressive performance of these state-of-the-art robots is to watch the video of the Lighthill debate.

Perceptrons don’t work

"Perceptrons" was the name given to a family of neural nets researched by psychologist Frank Rosenblatt between 1957 and 1962. Neural network research had almost died, and Rosenblatt was the only person keeping it alive. But in 1969, Minsky and Papert published a book of the same name outlining what perceptrons could do and, mostly, what they couldn't do. In the aftermath of the book's publication, perceptron research practically stopped.
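The book's best-known limitation is that a single-layer perceptron can only compute linearly separable functions, with XOR as the standard counterexample. The sketch below (learning rate and epoch count chosen arbitrarily) uses the classic perceptron learning rule: it converges on AND but can never classify all four XOR cases correctly:

# Perceptron learning rule: w <- w + lr * (target - output) * x.
def train(dataset, epochs=100, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in dataset:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def accuracy(dataset, w, b):
    return sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == t
               for x, t in dataset) / len(dataset)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in (("AND", AND), ("XOR", XOR)):
    w, b = train(data)
    print(name, "accuracy:", accuracy(data, w, b))  # AND: 1.0, XOR: at most 0.75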

What Computers Can’t Do – Dreyfus, 1972

According to Dreyfus, there are a few
assumptions of AI that are dead wrong, and because of this, he argues, AI has
no future. Two crucial ones are its psychological assumption and its
epistemological assumption:

According to the psychological assumption, the human mind is a device that manipulates symbols according to formal rules. (Remember what you have learned about the Computational Theory of Mind, and what it says about what it is to think.)

Dreyfus argued that humans don't think like that. According to him, we obviously do use symbols, but we use them against a background of non-symbolic common-sense knowledge that is required to provide meaning to those symbols.

According to the epistemological assumption, all knowledge can be formalized (remember again the CTM, and its view on propositional attitudes).

Dreyfus argued that most human knowledge is non-symbolic (he explained it as knowing how vs knowing that), and we are able to use it intuitively in some way. How could we formalize an experienced driver's thought processes, or those of a skier? They know how to do those things, but how could they express this knowledge in sentences? He gives the Heideggerian example of hammering a nail: we only think about a strategy when it goes wrong. He also points out that, according to Wittgenstein, we don't use language according to strict rules (it is only that the correct use of the language can be described by those rules), and we haven't learned it according to strict rules.

The Physical Symbol System Hypothesis

Before we look at Searle's paper and consider its impact, we should look at what it came to criticize – the claim that physical symbol systems can show human-level intelligent behaviour. A physical symbol system (PSS) is simply any mechanism that processes formal symbols (i.e. any computer built according to the Von Neumann architecture is a PSS).
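To see what "processing formal symbols" amounts to, here is a minimal sketch of a physical symbol system: a loop that repeatedly applies formal rewrite rules (invented for illustration) to a string of symbols, with no regard for what the symbols might mean:

# A tiny physical symbol system: rules rewrite symbol patterns into
# other symbol patterns, operating on shape alone.
RULES = [
    ("AB", "BA"),  # invented rule: A before B rewrites to B before A
    ("BB", "C"),   # invented rule: a pair of Bs rewrites to C
]

def step(symbols):
    # Apply the first rule whose left-hand side occurs in the string.
    for lhs, rhs in RULES:
        if lhs in symbols:
            return symbols.replace(lhs, rhs, 1)
    return None  # no rule applies: halt

state = "ABB"
while state is not None:
    print(state)           # ABB -> BAB -> BBA -> CA
    state = step(state)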

Although the idea that a machine that works exclusively by processing formal symbols can behave intelligently goes back at least to Turing, it was only in 1976 that Newell and Simon provided an explicit formulation of its alleged abilities. This is known as the Physical Symbol System Hypothesis (PSSH), a claim that can be said to be a natural conclusion of the Computational Theory of Mind, which we have studied earlier. Since reasoning, according to the CTM, requires only the manipulation of syntactic content, we can conclude:

“A physical symbol system has the necessary and sufficient means
for general intelligent action. By general intelligent action we wish to
indicate the same scope of intelligence we see in human action: that in any
real situation behavior appropriate to the ends of the system and adaptive to
the demands of the environment can occur, within some limits of speed and
complexity.” (Newell and Simon)

Winograd’s description of
the hypothesis sheds some light on its significance:

“This physical symbol
system hypothesis presupposes materialism: the claim that all of the observed
properties of intelligent beings can ultimately be explained in terms of lawful
physical processes. It adds the claim that these processes can be described at
a level of abstraction in which all relevant aspects of physical state can be
understood as the encoding of symbol structures and that the activities can be
adequately characterized as systematic application of symbol manipulation
rules.“

This will sound familiar to
you, since we have already discussed the Computational Theory of Mind.

When Winograd designed SHRDLU, he
believed that sophisticated semantics could arise from particular arrangements
of purely formal symbols. He came to change his mind completely. In his 1991 paper Thinking Machines (section 4.1),
he argues: “There are basic limits to what can be
done with symbol manipulation, regardless of how many different, useful ways to
chain things together one invents. The reduction of mind to the interactive sum
of decontextualized fragments is ultimately impossible and misleading.”

Winograd argued at this point that the
project of AI as it was being pursued by people like himself back then was
doomed to fail, because Classical AI research was based upon entirely flawed
assumptions:

“I have now come to
recognize a larger grain of truth in the criticisms than in the enthusiastic
predictions. The source of the difficulties will not be found in the details of
silicon micro-circuits or of Boolean logic, but in a basic philosophy of
patchwork rationalism that has guided the research. […]

It takes an almost
childish leap of faith [a reference to Minsky] to assume that the modes of
explanation that work for the details of block manipulation will be adequate
for understanding conflict, consciousness, genius, and freedom of will. ”

Since not enough was known about the
human brain for early AI researchers to make a fair attempt at replicating it,
they attempted to replicate the mind, or at least the mind as they saw it. And
in doing this they were influenced by philosophy, and in particular by the
rationalist philosophy of Descartes, Leibniz and Hobbes, which, as it turns
out, was probably mistaken.

Readings

Watch the first 5 (of 6) parts of the Lighthill Debate. You may choose to skip
part II, which is not particularly interesting.
Alternatively, you can read the summary in the course notes:

To learn about Searle’s Chinese Room
argument, watch Part III of the BBC documentary here:

http://www.youtube.com/watch?v=4tK8jNVX_4Y

…and continue watching it in Part IV
until it mentions Connectionism, roughly at minute 4:00. You can stop there.

http://www.youtube.com/watch?v=E_q7EXiyZok&feature=relmfu

The Chinese Room and its replies –
Stanford Encyclopedia of Philosophy (see 3. and 4.)

http://plato.stanford.edu/entries/chinese-room/

Daniel Dennett – Cognitive wheels: The
frame problem of AI (Read the first page to understand how the frame problem
affects robots)

http://www.idi.ntnu.no/~gamback/teaching/TDT4138/dennett84.pdf


Additional sources:

Read Terry Winograd's paper from the beginning up to and including 4.1. It will greatly help you understand the problems of the physical symbol system approach, at least as Winograd saw them:

http://hci.stanford.edu/~winograd/papers/thinking-machines.html
