
Can a Robot have Intelligence?


Many advocates in the study of artificial intelligence claim that a robot can possess genuine understanding and intelligence provided that it has an appropriate formal program and the right inputs and outputs. In this essay, however, I shall argue that this claim is false on the grounds of John Searle's Chinese Room thought experiment, which shows that syntax is not sufficient for semantics. Neither the robot reply to this argument nor the charge of chauvinism levelled against biological naturalism succeeds in showing that a robot can possess genuine understanding and intelligence. No matter how the Chinese Room is altered, and regardless of whether the robot is made of inorganic matter, the fact remains that a robot can only process syntactic information without understanding it, and therefore has no intrinsic intentionality. I shall conclude, on the basis of the thought experiment, that a robot cannot possess genuine understanding and intelligence.

According to Crane, the Chinese Room thought experiment was supposed to show that a formal computer program can "never constitute genuine understanding or thought, since all computers can do is manipulate symbols according to their form" (Crane, The Mechanical Mind, p. 125), thus refuting the stance of 'strong' AI. The thought experiment proceeds like this: Searle is locked in a room with two slots, one for input and one for output. Through the input slot he receives meaningless squiggles and, with the help of a rulebook written in English, matches them up with another batch of meaningless squiggles to feed back through the output slot. Unknown to him, he has just answered questions in Chinese, a language he has no understanding of. To native Chinese speakers outside, he appears to have understood and answered the questions appropriately. However, he had no understanding of the Chinese at all; he was just manipulating the squiggles according to the rulebook (the program) he was following in order to produce the answers. From this, it can be derived (Wilkinson, Minds and Bodies, pp. 207-208) that:

Axiom 1: Computer programs are formal.

Axiom 2: Human minds have mental contents.

Axiom 3: Syntax by itself is neither constitutive of nor sufficient for semantics.

Conclusion 1: Programs are neither constitutive of nor sufficient for minds.
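The purely syntactic manipulation at the heart of the room can be sketched as a toy program (an illustrative sketch with invented placeholder symbols, not Searle's own formulation): the rulebook pairs input strings with output strings by their form alone, and nothing in the pairing consults any meaning.

```python
# A toy "Chinese Room": input symbols are matched to output symbols
# purely by their form. The entries below are invented placeholders,
# not real Chinese.
RULEBOOK = {
    "squiggle": "squoggle",
    "squiggle-squiggle": "squoggle-squoggle",
}

def room(input_symbols: str) -> str:
    """Return the rulebook's paired output; no meaning is consulted."""
    return RULEBOOK.get(input_symbols, "squoggle")

# To observers outside, the answers can look competent; inside, nothing
# is understood -- the lookup attaches no semantics to any symbol.
```

The point the thought experiment presses is that no rulebook of this kind, however large, amounts to understanding: the program's behaviour is fixed entirely by the shapes of the symbols.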

If we apply this argument to the case of a robot which has a formal program, it would be clear that a robot would not be able to possess genuine understanding. Like a computer, it would manipulate symbols, but would attach no meaning to them. However, as we will see in the next section, advocates of AI would say that robots would be able to possess genuine understanding and intelligence if provided with the right sensory-motor abilities, and not just a formal program with the right inputs and outputs.

The robot reply to Searle's Chinese Room thought experiment first concedes that syntax alone would not be sufficient for semantics, and that a robot would need causal relations with its environment in order for its manipulation of formal symbols to gain meaning. Its advocates argue that by placing the digital computer inside the robot's head and implementing sensory apparatus such as a camera for its eyes, a robot would be able to match the symbols with objects in the world, and so possess genuine understanding and intelligence through the use of its sensory inputs and motor outputs. This argument, however, misses the point and does not address how robots process syntactic information. If Searle were put inside the robot's head, and Chinese symbols were coming in through the sensory apparatus, the information would still be meaningless, because the robot is still manipulating formal symbols in order to give 'instructions' to the robot's arms and legs to move in response. The processing of information is purely syntactic, and the Chinese Room thought experiment shows that syntax is not sufficient for semantics. Thus, Searle claims, the robot is only "moving about as a result of its electrical wiring and its program" and therefore has "no intentional states of the relevant type" (Rosenthal, The Nature of Mind, p. 514). Hence, adding sensory-motor capacities to a robot would make no difference whatsoever to its capability to possess genuine understanding and intelligence.

If robots are unable to attribute semantics to syntax, what makes us humans able to do so? According to Searle, the answer is 'biological naturalism'. The human brain, he argues, "does not merely instantiate a formal program, it also causes mental events by virtue of specific neurobiological processes" (Wilkinson, Minds and Bodies, p. 211). Because robots lack these neurobiological processes, they are unable to possess genuine understanding and intelligence. This argument, however, seems inadequate and attracts accusations of chauvinism. Why is biological matter able to have the right causal powers for intentionality, and not inorganic matter like silicon? How does biological matter possess those causal powers in the first place? In her essay, Margaret Boden attempted to bring these issues to light, arguing that intentionality on the basis of neurobiological processes should really be "grounded in empirical discovery" (Heil, Philosophy of Mind, p. 257) rather than in our intuitions about genuine intelligence and understanding.

In response, however, Boden's argument is ultimately irrelevant to whether a robot can possess genuine understanding and intelligence. While Searle's explanation of intentionality may have seemed dubious at its time of publication, the original Chinese Room thought experiment did not set out to imply that only biologically based systems could possess genuine understanding and intelligence. To clear up this misunderstanding about the thought experiment, Searle sets out one further axiom and three conclusions (Wilkinson, Minds and Bodies, pp. 211-212):

Axiom 4: Brains cause minds.

Conclusion 2: Any other system capable of causing minds would have to have causal powers equivalent to those of brains.

Conclusion 3: Any artefact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.

Conclusion 4: The way that human brains actually produce mental phenomena cannot be solely in virtue of running a computer program.

In sum, a robot with an appropriate formal program and the right inputs and outputs, as well as sensory-motor capabilities, can still only process syntactic information without understanding it at all. It does not matter whether causal connections are made with its environment, or whether it is made out of a certain type of material. The fact remains that without the specific causal powers that the human brain possesses, no intentionality can be attributed to the robot. Therefore, on these grounds, a robot cannot possess genuine understanding and intelligence.

Bibliography

Crane, T. (1995) The Mechanical Mind. Harmondsworth: Penguin Books

Heil, J. (2004) The Philosophy of Mind. New York: Oxford University Press

Rosenthal, D. (ed.) (1991) The Nature of Mind. New York: Oxford University Press

Wilkinson, R. (2000) Minds and Bodies. New York: Routledge
This is the second essay for the special university course I'm doing called Mind and Morality. This time, I was given seven essay questions for this second part of the course, which is about Nature and the Mind, and I picked the seventh one, as seen here:

Could a robot possess genuine understanding and intelligence? Discuss with relation to Searle's Chinese room thought experiment.

I got 77% for this essay, which is a Distinction, and well, I was pretty damn happy even though I just scraped into the Distinction band by 2%. Still, it was a significant improvement of 7% over my first essay, so I was really happy. But then again, this time I managed my time a bit better, did more thorough reading of the material I was given as well as other material from the university library, and incorporated some of the tips my lecturer gave me on improving my essay. Like, for example, how the question asked about robots, not just the computers that Searle's thought experiment mainly dealt with. I felt I really owed my lecturer, as his advice helped me structure this essay and gave me ideas on what to talk about.

My lecturer's comments on this essay:

This is clearly distinction grade essay writing. Your answer is as good or better than many first year essays on this subject. The best parts of the essay are that you answer the question specifically about robots (rather than just computers) and that you integrate points of view from different philosophers, carefully referencing to support your argument. To improve this essay you could make sure to quote material from its source rather than a commentary (i.e quote Searle from Searle's own work). At your level you can easily work from primary sources. Really good work!

Well, I guess those four days of spending six hours on the essay, including the day it was due (luckily it was an athletics carnival I didn't really have to go to for school), which I finished off just two hours before I was due in the lecture, really did pay off. ^^; Now I have a 2000 word essay left to write on ethics. :faint:

Word Count: 1106 words, but is variable due to some editing of this paper prior to posting, taking in account some of the edits my lecturer made on the actual paper itself. The absolute limit was 1200 words for this paper.

Special Mention: Thanks !KneelingGlory for reading it through before I handed it in. :heart:

Comments would be wonderful. :)
© 2010 - 2024 julietcaesar
Critique from PaperDart:
:star::star::star::star-half::star-empty: Overall
:star::star::star::star-half::star-empty: Vision
:star::star::star::star-half::star-empty: Originality
:star::star::star::star-half::star-empty: Technique
:star::star::star::star-half::star-empty: Impact

Hey Rachel

So I found this delicious piece of academic writing in my message centre – on an awesome topic, too – and devoured it happily. I started nitpicking before I got to the generic appreciative comment though, and decided that I might as well make this a full on critique. The stars are completely random, because I can't make them apply to this kind of writing.

I'll begin by considering the structure of the essay. You've introduced the concept of Searle's Chinese room. Then you show us two philosophies that oppose the concept. You show each of those to be irrelevant, leaving your original Searle-based decision standing and draw your conclusion from there. That's an okay structure, although it's a little shaky on a couple of points. The one that immediately stood out to me is that you've only responded to two of an unknown number of arguments. After some thought, I assume that those two arguments encompass the majority, if not all, of the opposition to your view. It's worth mentioning that, though. It's important to show that you've been comprehensive. The other consideration I have about the structure is your balance of positive and negative content. That is to say, you spent much more time refuting opposing claims than making claims in support of your own argument. That's not necessarily a bad technique, but it does require that you begin by explaining why your position is, by default, correct. You have done that to some extent, but my feeling is that it could have been done a little more definitively. My general impression is that the structure served the essay in getting it from A to B, but that it wasn't as crisp and clear as it might've been if the stages of the argument had been a little more clearly defined by paragraphing and connecting statements.

You might also have made your argument slightly crisper by defining the words you used more carefully. You have a large, well handled vocabulary, but academic writing usually requires more definite definitions than other writing or speech. Standards vary from institution to institution, but in general I'd recommend defining the key terms of your essay in the introduction. A big one that stood out for me was 'axiom'. Crane apparently derives axioms from Searle. Axioms are usually defined as statements that are accepted without derivation (essentially because there is no derivation). Either you mean something different from the norm (as I perceive it) by 'axiom' or by 'derive'. My gut feel is that you're using 'derive' a little more loosely than I'd expect in a fairly scientific paper.

There are a few other words that I didn't feel were entirely defined. They didn't remove the meaning and argument of the essay as a whole, but they did make some sections weaker. Since an argument is such a carefully created structure, one weak element can bring the whole thing perilously close to crashing down! The following are terms that I think the essay would've benefitted from having defined in context: 'formal', 'syntax', 'semantics', 'biological naturalism' (this is used in your introduction, but only explained further on), 'intentionality', 'understanding', 'intelligence'. If some of those have a common definition in ethics, please ignore me – I'm much better versed in the Engineering/Computer Science angle on this topic. You've used the words consistently. Having read the entire essay, I could even make a decent guess at the definitions you had in mind for many of those terms. There's just a little bit of a delay in communicating some vital information to the reader, and some information that you never give.

Having nitpicked, I'll reiterate that I very much enjoyed the read. You have the academic writing style pretty well down, and your argument was solid, if not absolutely crystalline. The essay succeeded in being interesting without losing the accuracy and (relative) objectivity that characterise academic writing. I'm half-inclined to start debating genetic programming now. :giggle: However, I will instead conclude by thanking you for the read.

I hope this is somewhat helpful; please feel free to discuss or ignore anything you disagree with. :)