Oops, I just replied to this using my wife's account by mistake again. Hope I didn't screw things up. Here's the same item (I saved before sending!) under my name:
--- In Wittrs@yaho..., Neil Rickert <xznwrjnk-evca@...> wrote:
> SWMirsky@ wrote:
> > Neil (2): My inclination is to say that a bee is aware, but to be
> > skeptical as to
> > whether it is conscious.
> > Me (2): Here you clearly distinguish between "awareness" and
> > "consciousness" which I have concluded is not correct. Thus, I think
> > it's now incumbent on you to say what the difference is.
> We have to keep in mind that both "consciousness" and "awareness" are
> vague terms. They are not specific enough that we can identify
> empirical criteria that would say whether a bee is aware, or whether a
> bee is conscious. The meaning of these terms may change, over time, as
> we come to better understand cognition. So, if you like, I am saying
> that the meanings are more likely to change enough for people to say
> that the bee is aware, than they are to say that the bee is conscious.
> It's my impression that, at this time, there are many people who would
> deny that a dog is conscious, but there are few who would deny that a
> dog is aware.
Yes, but the issue is that I began this by making the point that what we generally mean by "conscious" is "aware", i.e., that that is the common thread in our uses of "conscious". And while "aware" is also vague, as you put it, so are most terms, in the family-resemblance way. My point is that something is gained by resorting to the characterization of "awareness" because this tells us something more than "conscious" does. Since you agree that people may be more likely to grant awareness than consciousness to a dog (I find that odd myself), you are saying that there is a difference in usage for many of us and that "awareness" adds something to the meaning of the first term.
So what do you think is the difference between calling something "conscious" and calling it "aware"? It can't just be that people will use the terms differently. The question is: what do you think is denoted that is different?
My point, of course, was to suggest that, on observation, there is no difference, that what we mean by one IS what we mean by the other. Since you think otherwise, what is the difference here?
> (on automobile awareness)
> > Me (2): No, because any physical thing will respond in certain ways when
> > certain forces are applied to it in certain ways. Although these
> > responses may not always be predictable to us in the circumstance, they
> > are at least in principle predictable if we have enough information
> > about the pure physical forces. But living things with at least a
> > certain level of awareness will have the capacity to direct their own
> > behavior in response to some stimulus (even if we have the ability to
> > predict some or perhaps even all of the responses -- in THAT case we are
> > predicting based on what we have learned of these entity's dispositions
> > to respond, not on what we know of the physics as is the case with an
> > automobile).
> It seems to me that you are making a clear distinction between living
> things and mechanistic things. I agree with this, but it seems to be a
> change from some of what you had posted in earlier threads.
No, it's the same. It's just that I recognize a continuum here and so am not sure it is meaningful to say "here is the dividing line, on one side there is consciousness, on the other not". At the borders I think things blend into one another and we only recognize the distinctions as we move further away in either direction.
> > Neil (2): I would say that your cat is thinking. Of course, the cat does
> > not have a language, and so won't be thinking in the way that we do. But,
> > in a sense, it probably is thinking about getting ready to pounce. That
> > goes back to a comment in an earlier post, where I somewhat agreed with
> > the behaviorist view that thinking is behavior, and that thinking is
> > about motor actions. In our case, the motor actions involved in speech are
> > particularly important to us, and we tend to overlook the motor
> > component of that.
> > Me (2): Here it seems to me we get into competing characterizations. Is
> > my cat thinking? Well I think it's pretty clear she has awareness at
> > least.
> I would say that she is aware of things in the world. She also has some
> degree of self-awareness. In particular, she is aware of her readiness
> to act (as in pouncing on the bird).
Ah, I would agree. I would go further and say that, being aware, she is conscious (because that's all being conscious entails). But I would not consider it thinking in the sense of following a line of ideas from one to the next, etc. If being aware is to think, however, then I would agree with your interpretation. So here I am making a distinction you seem unprepared to make. It doesn't matter, of course, as long as we recognize what we each mean in such cases.
> > But just having awareness need not constitute thinking. A snail
> > crawling in my garden has awareness since, if I prod it with a twig, it
> > withdraws into its shell.
> I would be careful about saying that. It seems awfully close to
> identifying awareness with reactivity, and you had just denied that a
> few paragraphs back.
I added the criterion of autonomous action though (which you questioned when I introduced it). The snail doesn't move into its shell because I push it into it. It recoils from the contact I initiate and part of that involves behavior that puts it back in its shell. It's by no means clear that that's just physics, even if the behavior doesn't violate the so-called physical laws (the snail doesn't transport itself back into the shell, it moves in a perfectly physical way).
> If you are going to say a snail has awareness,
> then will you also say that an amoeba has awareness - even though it has
> no neurons?
Or a plant with photosynthesis that follows the light. Remember my point about the continuum of behaviors and organisms. At some point along it, we no longer want to say, "ah, there is awareness there". But at what point? I don't think there is a distinct demarcation line, only a gradual blending, and the distinctions are best recognized at a distance from one another.
> > Is
> > the fact that my cat observes a bird enough to suppose that, as the cat
> > sits observing, it is having thoughts about that bird and what it might like
> > to do to it?
> No, it isn't just the observing. Rather, while observing, the cat seems
> to tense its muscles as if planning to pounce, and seems to be
> attempting to judge the time and force needed.
I'd be careful about saying "judge" but yes, the cat has awareness of both the bird and its own physical movements and responses, even if it cannot objectify them into the kinds of representations we use all the time. One is moved to think that not only can it not talk about such things, it cannot conceptualize them either. Having concepts seems to be dependent on language, or at least on something very like having language capacity. Perhaps this says something about language, perhaps about concepts.
> That seems to fit my
> view of what thinking amounts to.
So on your view, thinking is just a certain level of awareness. Okay, I understand that. While I disagree with the characterization you employ here, I think we are speaking of the same thing. We're just using different terms, since I would not call being aware "thinking". Now the more important question may be whether these two ideas are linked or whether it's just a matter of how we're each using the word. Would a computer capable of representing at the level of the cat, say, be thinking on your view? Such a machine may be aware of movement in its visual field, be able to distinguish between different images in that field, be able to marshal its motor resources, and yet be aware of limits in those resources such that, rather than "jumping" at the image in its field, it elects to remain immobile. One can imagine a fairly sophisticated machine with such integrated sensor and motor-initiation capacities. Is that enough for it to be said to be thinking?
If you say yes, I might say you are setting a low threshold but I will understand you and agree that that might be on the same continuum of what I would call thinking in my own lexicon. Certainly there's no reason to think that such a mechanical view of things precludes ascribing thinking to a device or entity or that it means we can't impute awareness to it at some level or, therefore, consciousness.
My point, of course, is to say there is nothing uniquely special about consciousness on this view, that it's just a particular array of capacities and operations and so can be replicated on a machine platform no less than on an organic one -- at least in theory. (It still may be the case that to achieve the full integration of these functions one needs something that is uniquely found on an organic base like brains, though that, I submit, would be an empirical and not a conceptual question.)
> > Me (2): One man's "handwaving" may just be another's missed point. I am
> > not saying it is necessarily easy to make things like this clear in
> > language. In fact I'm inclined to think it's not though I keep trying.
> > But it seems to me that the point of philosophy must be to try, if there
> > are difficult thoughts to be explicated. On the other hand, accusations
> > of "handwaving" are the easiest thing in the world to make.
> It was a comment, not an accusation. Your account of awareness does not
> give me anything that I could test empirically. And you might say the
> same about my version of what awareness is.
No, it does not, because it is a conceptual account. It relies on understanding our usages in cases where consciousness is ascribed. Must conceptual points be thought of as "handwaving"? If so, then isn't all philosophy no more than that?
My sense of "handwaving" is not that it represents unempirical claims but that it involves a resort to vagueness masked by lots of words, noise intended to conceal the distinctions being made, so that we may get away with claims that might not otherwise stand up to scrutiny. It seems to me conceptual claims are certainly subject to scrutiny insofar as they depend on giving a clear and reliable account of the usages we are invoking. Above I explained why I think it makes sense to equate consciousness with awareness and why I draw certain lines (between awareness and thinking, say). While this is not a matter of handing you empirical tests, it does depend on our common understanding of the usages we each resort to in this kind of discussion.
> >> Neil: But what does "merely physical" mean here?
> >> Me: I thought I made that pretty clear!
> > Neil (2): It's not clear to me. If "physical" means "composed of atoms"
> > then we
> > are not physical.
> > Me (2): Of course I didn't say that, did I?
> Well, that makes it significantly clearer than it was. It's the sort of
> thing I was looking for when I asked what "merely physical" might mean.
> > Me (2): I wasn't alluding either to idealism or realism. You brought
> > these isms into this. As to lack of distinguishability, I think it's
> > pretty clear that idealism is distinguishable from physicalism in the
> > metaphysical sense.
> My comment was intended to be on the difficulty of distinguishing them.
Ah, okay. As I have said in the past, I think the whole point of metaphysical claims is to cover all empirical bases and so there is no way to make such distinctions. Insofar as empirical tests exist, such questions are no longer metaphysical. Insofar as they remain metaphysical, they are pointless to argue about.
> > Neil (2): I am typing, and forming representations, because to do so is
> > useful to me. It is not useful to the computer. The computer just
> > follows mechanical rules that it applies to the representations it
> > receives, with no consideration of usefulness.
> > Me (2): In this you give a teleological account that already implies
> > awareness, i.e., usefulness is a concept that, to be intentionally
> > implemented, must already presume awareness of things (of objects and
> > objectives and of what could secure either of these for the
> > implementer).
> I was not intending anything teleological.
> > Granted living organisms pursue self-preservation and
> > self-perpetuation and don't need to be aware they are doing that.
> That's all I need to talk of usefulness. I see that as a sufficient
> starting point from which awareness can grow.
But the issue is how, because we still want to know what it means to be aware. Recall that my point is not that some organisms have awareness at various levels and some may not. It's rather to ask what awareness is. If it is some irreducible property ascribable to some physical events (or objects) and not others, then we have no account, just a claim of existence in some form or other. On the other hand, if we can explain it as a function of certain underlying processes which are not, themselves, aware but which, brought together, produce it, then we have an account that explains what it means to be conscious. Dennett wants to say that it is certain kinds of computational processes. Edelman tells us that, yes, it is processes, but they must be organically generated, because of the brain's uniquely complex morphology. Hawkins, in his work on intelligence, suggests (though he isn't speaking about awareness per se) that it will be certain relatively simple algorithms at work, operating differently from the computationally complex kind we find in computers. But all three agree that it is a kind of "system quality" a la Minsky rather than some special and ontologically distinct property of some kinds of matter that just happens to come into existence under certain conditions.
There is an important conceptual difference to be identified and understood here, I think.
> > Neil (2): Note that this is what got me kicked off the ai-philosophy
> > group. My position is that we are not receivers and processors of
> > representations. Rather, we are creators of representations. And that's
> > what I see as making AI problematic.
> > Me (2): How can we be creators if we don't already operate in a milieu
> > of representations? Can we create what is not part of what we are?
> I'm not sure that "milieu" is quite the right word there, but I'll use
> it. Roughly speaking, we have to create that milieu for ourselves.
Isn't this confusing levels of operation? Sure we create our own milieus in one sense. But we don't create the milieu in which we operate consciously because we must first have whatever that milieu is to be conscious.
> I think it is part of what we are, to be able to create that.
> To put that in perspective, we see science creating such a milieu of
> scientific representations, and then creating representations within
> that milieu. And "milieu" is a reasonable word for that with science,
> since that is very much a social activity. But infants must do it
> individually (so "milieu" is not quite the right word) before they can
> even be aware that they are part of a society.
Do infants create the world around them or are they not enrolled into it at various levels of operation? Do they not gradually build up the brain functions needed to capture and retain order out of the chaos of their sensations and then the systems (including a language system) to participate in a community of other subjects in that world? Does it really make sense to say the infant is creating all this? And, supposing it does, to what extent is that "creating" like what we mean when we speak of an artist creating an artistic artifact? Isn't "creating" a little misleading here?