
Explore some of the issues surrounding the attribution of consciousness to machines and to non-human animals.

 

On what basis would we attribute consciousness to anything at all?

We attribute consciousness (a mind) to other people on the basis of inductive inferences from their observable behaviour. (See my response to the previous essay question above.) So if we are to attribute consciousness (a mind) to non-human animals or to machines, that attribution would have to be justified on the same basis.

(1) Do we have sufficient evidentiary justification to attribute consciousness (a mind) to non-human animals? (I will take it as given that we do not currently have sufficient evidentiary justification to attribute consciousness to machines.)

When I attribute consciousness (and a mind) to other people, I am basing my inductive inference on a lot of evidence that other people look much as I do, behave much as I would behave in similar situations, and use language much as I would use it in similar situations. Add to this the observation that attributing to other people the same kind of mental existence as I have is a cost-effective approach to managing my relations with them. Together, I judge that these multiple lines of support provide sufficient justification for my attribution of consciousness to other people.

Gathering sufficient evidence to justify the same kind of inductive inference about the consciousness and mind of other animals, however, faces a couple of serious challenges.

The first is that no other animal but Man uses language in the same way that I do. Or, at least, no other animal uses a language I can understand in the same way that I do. Even if another animal does use language as a tool of mind in the same way that I do, if I cannot understand what it is saying, I cannot notice the similarity or use that similarity to justify the inference that there is a mind behind the language use. Hence, I cannot use the similarity of language use as a basis from which to note a similarity in the kinds of things talked about, or to draw inferences about the cause of the talking.

The second is that there really is not a great deal of similarity between the kinds of behaviour that animals display and the kinds of behaviour that I would display in similar circumstances. I cannot, therefore, easily use the behaviour of animals as justification for an inference that their behaviour is driven by the same kind of mental life as mine.

What evidence I can gather is therefore quite limited in scope. But I can and do gather quite a bit of it nonetheless. I have a pet dog named Sarah. She is an eight-year-old Yellow Labrador Retriever. She displays behaviour that can easily be characterized as indicative of pain, hunger, pleasure, fear, and so forth. I cannot be certain, of course, that she actually does feel these feelings. She could just as easily be a philosopher’s automaton. But she yelps in situations where I would yelp in pain. She snaps at and gobbles food the way I would if I were hungry. She reacts to changes in her environment the way I might in her place. And she uses her paws to hold her toys the way I might if I had paws. And so forth. So it is an easy step to explain and predict her behaviour on the assumption that she has consciousness and a mind.

But for all that she is smart for a dog, she is also quite stupid. She obviously has a very limited memory, does not form concepts, and does not reason. It takes a lot of repetition of a lot of minute detail to train her to do the simplest of “tricks”. Her powers of generalization are limited at best, if not completely non-existent. She can’t even figure out that she has walked on the wrong side of a tree and got her leash snagged, despite the fact that she does this quite frequently. So if she does have consciousness and a mind of a sort, it is clearly not the same sort of consciousness and mind that I have, or that I attribute to other people. There are far too many circumstances in which her behaviour is easier to understand and predict by assuming it consists of pre-programmed automatic responses rather than the products of a conscious mind.

And all that when Sarah is “smart” for an animal. Start looking at other species, and the problems of attributing consciousness and a mind escalate quickly. On the overall scale of living things, the species that are considered “higher” amount to a vanishingly small fraction of all animal species. And they are only considered “higher” because they tend to react (for whatever reasons) the way that we would in similar circumstances. We say to ourselves that we “understand” why they do what they do, because we would do something similar in similar circumstances. Without any real justification, we attribute to them the same “why” as we would have under the circumstances. Sometimes, as with my Sarah, this attribution of a mental life is simply a pragmatic convenience in trying to predict how she will behave. But mostly, it is just the result of our over-application of the principles that allow us to attribute consciousness and a mind to other people (who are, after all, animals too).

(2) Would (and/or should) the moral status of animals be influenced by the extent, type, degree, nature, or kind of consciousness attributed to them?

This is the question that animates most of the animal rights controversies. How one answers this question will depend intimately on one’s standards of moral right. My own personal answer is — no! But I will leave defending that answer to a later essay when ethics is the topic.

(3) What happens if/when we have as much evidentiary justification to attribute the same kind of consciousness (a mind) to a computer as we currently attribute to other people?

If Robbie looks like me, behaves much like I would under similar circumstances, and uses language to describe and express reasons, concepts, thoughts, feelings, and desires in the same way that I would under similar circumstances, then I have sufficient justification to infer that he has a mental life similar to mine. The question now becomes: how far can we relax these fields of evidence before we must admit that the inference is no longer sufficiently justified?

I argued above, when discussing the consciousness of animals, that the absence of the language field and a paucity in the behaviour field render the inference insufficiently justified. But when it comes to computers (or machines in general), the limitations will be in different areas.

Clearly, if Robbie is a computer, it will almost certainly not look like me. And even if it did appear superficially similar (an android, perhaps), it would not “look like me” if I were to examine it more closely. However, if a computer passes the “Turing Test”, then it will have demonstrated that it can use language to describe and express reasons, concepts, thoughts, feelings, and desires in the same way that I would under similar circumstances. So if Robbie is a computer, it would provide just as much evidentiary support in the language field as would any other person. Is that alone sufficient to justify an inference that it has a mental life similar to mine?
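To make the shape of that test concrete, here is a minimal sketch in Python of the imitation game. The judge exchanges text with two hidden correspondents, one human and one machine, and must decide from language use alone which is which. The machine_reply and human_reply functions are placeholders of my own invention, standing in for whatever program and person would actually be at the other end.

```python
import random

def run_turing_test(machine_reply, human_reply, rounds=5):
    """A minimal sketch of the imitation game: the judge (the person
    running this) converses with two hidden correspondents over text
    only, and must then say which one is the machine."""
    # Randomly hide the machine behind the label "A" or "B".
    correspondents = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        correspondents = {"A": human_reply, "B": machine_reply}

    for _ in range(rounds):
        for label, respond in correspondents.items():
            question = input(f"Your question for {label}: ")
            print(f"{label} answers: {respond(question)}")

    guess = input("Which correspondent is the machine (A or B)? ").strip().upper()
    machine_label = "A" if correspondents["A"] is machine_reply else "B"
    return guess == machine_label  # True means the machine was caught out
```

The only thing visible to the judge is the correspondents’ use of language, which is exactly the field of evidence at issue here.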

I’ll come back to that question in a minute. First I want to explore the matter of behaviour. Some people will argue that it is conceivable that a computer could develop (or be given) the capacity to pass the Turing Test without also being able to “behave” in a material sense. I would argue that this is not in fact possible.

I can conceive of only two ways that a computer could use language to describe and express reasons, concepts, thoughts, feelings, and desires in the same way that I would under similar circumstances. One is for the computer to be programmed to do so. And the other is for the computer to be programmed to learn how to do so. In the first case, the programming would have to be incredibly diverse, complex, and detailed. I would argue that in order to pass a Turing Test, the computer would have to have acquired as much “common sense knowledge” as an average person; otherwise, it will be quite obvious which correspondent is the computer and which is the person. The amount of “common sense knowledge” that the average five-year-old has already acquired is phenomenal. There has been an Artificial Intelligence project in place at Carnegie Mellon University for over 20 years that has been trying to codify that knowledge, so far with only limited success. I would suggest that the project will never be completed.

The simpler approach is to program the computer to be able to learn, and to acquire that “common sense knowledge” on its own. But in order to do so, the computer must become an active, exploring, self-preserving agent interacting with a dynamic environment. Even given the stipulation that the computer is programmed to “learn like me”, it will not learn the same sorts of things that I have learned if it has not interacted with its environment in similar ways. So I would argue that for the computer to use language in a manner similar to the way I use it (and thereby justify my inference that it has a mind like mine), the computer would also have to be able to respond to a dynamic environment in ways similar to the way I would respond. This means that Robbie the computer would have to behave in ways I would find quite similar to the way I would behave under the same circumstances.
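As a rough illustration of the contrast between those two approaches, here is a toy sketch in Python (all names are my own, purely for illustration): a “scripted” responder that can answer only from a hand-coded table of facts, beside a “learning” responder that acquires its answers by interacting with an environment. Neither comes remotely close to common-sense knowledge; the point is only the structural difference.

```python
class ScriptedResponder:
    """Approach one: every scrap of 'common sense' must be hand-coded in
    advance. Anything outside the table exposes the gap immediately."""
    def __init__(self, knowledge):
        self.knowledge = knowledge  # maps a question to a canned answer

    def reply(self, question):
        return self.knowledge.get(question, "I do not understand the question.")


class LearningResponder:
    """Approach two: the responder fills its memory by acting in an
    environment and remembering what it encounters there."""
    def __init__(self):
        self.memory = {}

    def explore(self, environment):
        # Interaction with the world is what stocks the memory; with no
        # interaction, there is nothing for it to talk about.
        for topic, fact in environment:
            self.memory[topic] = fact

    def reply(self, question):
        return self.memory.get(question, "I have not encountered that yet.")


# The scripted agent knows only what was typed in for it; the learning
# agent knows only what its own "experience" has exposed it to.
scripted = ScriptedResponder({"Is fire hot?": "Yes."})
learner = LearningResponder()
learner.explore([("Is fire hot?", "Yes, and it burns."),
                 ("Do dropped cups break?", "Usually, if the floor is hard.")])
print(scripted.reply("Do dropped cups break?"))   # falls back to 'do not understand'
print(learner.reply("Do dropped cups break?"))    # answers from its own experience
```

The sketch also makes the dependence plain: what the learning responder can talk about is fixed entirely by what its environment has allowed it to do.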

So I would suggest that if Robbie the computer is indistinguishable from another person when examining only his use of language, he would also be (or have to have been at some point in his development) indistinguishable from another person when examining his behaviour. Therefore, I would argue that any computer that can successfully pass the Turing Test would not only use language like I do, but would also behave like I do. In other words, passing the Turing Test would demonstrate two of the three “similarities” that I use to justify my inference that other people have minds.

Other people might disagree, but personally, I would find that sufficient justification to warrant the inductive inference that Robbie the computer has consciousness and a mind. I consider the “looks like me” field to be of minor importance. As a materialist, I would argue that there is no function performed by my physical plant that might not just as well be performed on some foundation other than bio-chemistry.

(4) Should the moral status of machines be influenced by the extent, type, degree, nature, or kind of consciousness attributed to them?

How one answers this question will also depend intimately on one’s standards of moral right. My own personal answer is — no! But I will leave defending that answer to a later essay when ethics is the topic.
