
Wednesday, February 3, 2010

Why behavioural and computational theories are not enough - Searle's paper continued

Searle talks about other aspects of consciousness, such as intentionality (our thoughts can refer to things in the world), familiarity (we have preconceived notions of things and use them to categorize what we encounter), and mood. I think these are perhaps aspects of the whole human conscious mind, but they tie into consciousness rather than being a key part of it.

The key part, I think, is subjectivity. But this aspect puzzles us, and our current mode of science has no notion of it.

Some Common Mistakes about Consciousness

I would like to think that everything I have said so far is just a form of common sense. However, I have to report, from the battlefronts as it were, that the approach I am advocating to the study of consciousness is by no means universally accepted in cognitive science nor even neurobiology. Indeed, until quite recently many workers in cognitive science and neurobiology regarded the study of consciousness as somehow out of bounds for their disciplines. They thought that it was beyond the reach of science to explain why warm things feel warm to us or why red things look red to us. I think, on the contrary, that it is precisely the task of neurobiology to explain these and other questions about consciousness. Why would anyone think otherwise? Well, there are complex historical reasons, going back at least to the seventeenth century, why people thought that consciousness was not part of the material world. A kind of residual dualism prevented people from treating consciousness as a biological phenomenon like any other. However, I am not now going to attempt to trace this history. 

Instead I am going to point out some common mistakes that occur when people refuse to address consciousness on its own terms.  The characteristic mistake in the study of consciousness is to ignore its essential subjectivity and to try to treat it as if it were an objective third person phenomenon. Instead of recognizing that consciousness is essentially a subjective, qualitative phenomenon ... 

The two most common mistakes about consciousness are to suppose that it can be analysed behavioristically or computationally. The Turing test disposes us to make precisely these two mistakes, the mistake of behaviorism and the mistake of computationalism. It leads us to suppose that for a system to be conscious, it is both necessary and sufficient that it has the right computer program or set of programs with the right inputs and outputs. ... A traditional objection to behaviorism was that behaviorism could not be right because a system could behave as if it were conscious without actually being conscious.  There is no logical connection, no necessary connection between inner, subjective, qualitative mental states and external, publicly observable behavior. Of course, in actual fact, conscious states characteristically cause behavior. But the behavior that they cause has to be distinguished from the states themselves.

The same mistake is repeated by computational accounts of consciousness. Just as behavior by itself is not sufficient for consciousness, so computational models of consciousness are not sufficient by themselves for consciousness.  There is a simple demonstration that the computational model of consciousness is not sufficient for consciousness. I have given it many times before so I will not dwell on it here. Its point is simply this: Computation is defined syntactically. It is defined in terms of the manipulation of symbols. But the syntax by itself can never be sufficient for the sort of contents that characteristically go with conscious thoughts. Just having zeros and ones by themselves is insufficient to guarantee mental content, conscious or unconscious.  This argument is sometimes called 'the Chinese room argument' because I originally illustrated the point with the example of the person who goes through the computational steps for answering questions in Chinese but does not thereby acquire any understanding of Chinese.[1]

The point of the parable is clear but it is usually neglected. Syntax by itself is not sufficient for semantic content. In all of the attacks on the Chinese room argument, I have never seen anyone come out baldly and say they think that syntax is sufficient for semantic content. 



I'm not sure about the syntax/semantics distinction there, but maybe it has some bearing.
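To make that distinction concrete for myself, here is a toy sketch in Python (the rule table and phrases are invented for illustration): the program pairs questions with answers by rote lookup, so the symbol manipulation is all syntax and no semantics.

```python
# A toy sketch of purely syntactic symbol manipulation, in the spirit
# of the Chinese room. The rule table and phrases are invented for
# illustration; nothing in this program understands Chinese.

RULES = {
    "你好吗?": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样?": "天气很好。",  # "How's the weather?" -> "It's nice."
}

def chinese_room(question: str) -> str:
    """Answer by pure lookup: symbols in, symbols out, no understanding."""
    return RULES.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗?"))  # prints 我很好，谢谢。
```

To an outside observer the answers might look like understanding, but the function is only shuffling uninterpreted strings, which seems to be Searle's point about syntax.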

We don't even know how to describe subjectivity in science, do we?

Tuesday, December 15, 2009

Can we imply consciousness through functional equivalence?

I was chatting to Mark tonight about the consciousness stuff. I ended with the idea that subjective experiences seem impossible to prove, but what if we could show two minds to be equivalent, by modelling their execution or function?

Mark said that we can't even do that for two humans, and we especially can't do it between a human mind and an artificial mind, because an artificial mind had an intelligent designer: its function is completely determined. By contrast, we cannot know whether we have fully captured the functioning of the human mind; there might be some aspect we have missed, and we would never know. Perhaps that missed aspect is significant enough to make the two systems inequivalent.

I said, well, what about "for all practical purposes"? We are able to explain things at a molecular level, like ATP production or photosynthesis, so we could just use Occam's razor and assume that the simplest working model is good enough. I know this still doesn't rule out the possibility that some process we aren't aware of is responsible for the subjective experience we are trying to pinpoint.
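As a sketch of what "equivalent for all practical purposes" could mean, here is a minimal Python example (the two mind_* functions are invented stand-ins, not models of anything real): we can probe two black boxes with sampled inputs and compare their outputs, but a pass only means no difference was found, never a proof of equivalence.

```python
import random

def mind_a(x: int) -> int:
    # Stand-in for one "mind": squares its input.
    return x * x

def mind_b(x: int) -> int:
    # Implemented differently, but behaviourally identical here.
    return x ** 2

def behaviourally_equivalent(f, g, trials: int = 10_000) -> bool:
    """Probe two black boxes with sampled inputs.

    Returns False at the first observed difference. Returning True only
    means no difference was found in the sampled inputs; it is never a
    proof of equivalence, since some untested input may still differ.
    """
    for _ in range(trials):
        x = random.randint(-10**6, 10**6)
        if f(x) != g(x):
            return False
    return True

print(behaviourally_equivalent(mind_a, mind_b))  # True, but inconclusive
```

The asymmetry is the interesting part: the test can falsify equivalence but can never establish it, which is more or less the gap Mark kept pointing at.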

But the fact that Mark went so far as to say that you can't assume two human minds are equivalent annoyed me. He also started making a point about how perhaps I think they are equivalent because the alternative, i.e. accusing someone of not being sentient, is undesirable.

I started getting pissed off at him and didn't want to continue the argument. I felt that he wasn't taking my arguments seriously and was at times completely missing the point I was trying to make.

It is not helpful to get emotional when arguing. I need to practice not getting emotional, dealing instead with the thoughts and ideas that come up, and seeing where they lead.

One thing I did find interesting about what Mark said - he made a point about how we seem to require a theory only when there is some deviation that needs to be explained. E.g. we assume that other people are sentient as well because there is no reason not to? Something like that.