# I have a question

Discussion in 'General Science & Technology' started by Redd, Apr 17, 2012.

Not open for further replies.
1. ### Yazata (Valued Senior Member)

It's probably not just a matter of "critical mass". The generation of phenomenal (self-aware experienced) consciousness is something a lot more sophisticated than that, a process (or a whole collection of related and nested processes) that the brain performs.

Even simple animals have some rudimentary awareness of their environment. They react to environmental conditions, to food, to enemies and to the opposite sex (assuming that their species has sex).

As nervous systems evolved and grew more complicated, animals didn't just react to their environment any longer. They acquired the ability to learn, to remember, even to imagine things that don't exist and hence to plan future activities.

So now animals are picturing situations where one of the actors is "me", their own biological organism. They start to develop a sense of their own self.

And what's more, that new "me" isn't just aware of things that are happening outside its physical body. It's also aware of memories, future plans and evaluative emotions that are being generated within the neural network as well.

So at some point, an animal is going to get an idea that its "me" isn't simply its biological body at all, but something much more mysterious... something that rides around inside its head or someplace and is aware of its physical body just as it's aware of rocks and trees outside. The idea arises that the inner subjective "me" is some kind of transcendental 'spiritual' principle and the idea of a supernatural soul and the beginnings of philosophical idealism arise.

My guess is that the explicit development of that idea was probably associated with the appearance of conceptual language robust enough to model the new philosophical concepts. But the sense of "me", even a vague sense of 'me' being something mysterious and transcendental, has probably been there for longer than humans have been human.

2. ### Cyperium (I'm always me, Valued Senior Member)

We say that they have awareness of their surroundings because they react to them, but we could make a robot that reacts to its surroundings without any sense of awareness. Because of that we can't equate the reaction with the awareness; for all we know, we could be the only species that has subjective awareness the way we know it.

That's not to say that I don't believe that animals are aware. I do believe that. I'm just saying that we can't tell whether they are just by their reactions to the world, since such a system could be constructed without involving subjective awareness - which has been demonstrated with robots, which (as far as we know) aren't aware.

In fact, if we equate the reaction to the outside as being aware of it then everything is aware (which is actually one of the theories of consciousness) since everything in one way or another reacts to their surroundings.

Let's also add that it's disappointing to see that I'm the only one with the opposing view - in this thread of course.

Last edited: Apr 22, 2012
3. ### Carcano (Valued Senior Member)

It doesn't seem logically possible for a mechanical process to create anything other than another mechanical process.

Last edited: Apr 23, 2012
4. ### Fraggle Rocker (Moderator)

Uh... "huge"? Yes, theoretically. As Billy points out, that's what the Butterfly Effect is all about. But in practice, how often does a tiny variation create the absolutely perfect cascade of sequential changes to cause a huge difference in outcome... by pure chance? Once in a lifetime? Once in a millennium? Once in the existence of the universe?

Sure, we observe situations that we attribute to the Butterfly Effect all the time, but that's not a proper use of the term. We're talking about the actions of people affecting future events, but none of those is a tiny variation.

My girlfriend broke up with me after I had already bought tickets to take her to see the Blue Oyster Cult. The next girl I dated didn't like BOC but she happened to have promised to fix her friend up with a blind date and I went along for the ride. I saw a BOC album on her coffee table and asked her if she wanted to go to the concert and she said "sure." Two years later we got married and after another 34 years we still are. But none of those events would have gone awry due to a truly tiny variation. I could have bought the tickets two weeks earlier or two weeks later. We could have gone to see this lady two weeks earlier or two weeks later. Since these two ladies were friends I would probably have met her anyway. Etc., etc.
Hey, we build machinery to simulate organic processes all the time. That doesn't gainsay the reality of the organic process.
Since we don't really have a good definition of "awareness," the grounds for this discussion seem a bit shaky. If we build a robot that acts like it's aware (perhaps a special version of the Turing Test is required), how are we to judge whether it's real or an incredible simulation?
Read Code of the Lifemaker by James P. Hogan. Human explorers encounter a civilization of mechanical beings who are sure they evolved through random accidents of bits of matter running into each other. Eventually they learned how to build what we call organic matter, and used very primitive muscle, bone, nerve, etc., tissue to fashion the technology of their world.

The humans assume that the machines are leftovers from a civilization of organic beings who died out, perhaps in a war. And of course the machines feel the same way about us.

5. ### Redd (Registered Member)


Carcano, I had to drop it and walk away after the free will debacle - I had two assignments to hand in and I couldn't deal with the concept lol.

Good to see more info here, I'll trawl through it when I get a chance.

6. ### Billy T (Please use Sugar Cane Alcohol Fuel, Moderator)

As Fraggle points out at the end of post 44, and I think it is universally true, the really important things that change your life dramatically always have some "pure chance" factor in their causation.

Accepting this, one can wonder why struggle, study and plan, but Goodyear gave a good answer to that. You may (or may not) know that after years of working with native latex, he accidentally dropped a glob on his hot potbelly stove and discovered the "vulcanization process" for making latex into useful rubber.

Some years later, when a reporter asked him how he discovered vulcanization, Goodyear described the accident, and the reporter exclaimed: "So it was not careful research - it was pure chance." Goodyear replied: "Yes, but chance favors a prepared mind."

7. ### Cyperium (I'm always me, Valued Senior Member)

Then it should seem logically possible to observe that process if it were mechanical. How come, then, that we cannot observe consciousness but only the mechanical processes behind it?

How come consciousness can only be shown through testimony if it is mechanical? That proves that consciousness must be something other than the mechanical. I know it's hard to see, but that's the way it is.

The thing is that consciousness is self-evident, and there is no evidence of it from anything external to us. There is already much support for consciousness being more than the parts that make it up, an emergent property of a complex system.

Fraggle Rocker:
I'm just saying that since we can make a machine (robot) that reacts to the environment, we cannot say that just because something reacts to the environment it must be aware. I didn't say anything about the inverse (that since machines aren't aware, everything that reacts to the environment cannot be aware - that would be a fallacy of some kind).

In other words: there is at least one example where reaction to the environment (as far as we know) doesn't give rise to awareness of it; therefore we can't say for sure that an organism that reacts to the environment must be aware (even though it could be).

Last edited: Apr 23, 2012
8. ### Billy T (Please use Sugar Cane Alcohol Fuel, Moderator)

This rapidly becomes a semantic game, as two very different ideas are hiding in "awareness."

Consider, for example, a thermostat turning the furnace on and off. It must be aware of the room temperature to do that job but is not even aware of its own actions in the other sense of awareness, which I prefer to call "consciousness of."

This, however, leads to confusion too, as what humans mean by consciousness is not just "consciousness of" your own state; many simple machines have that. I.e., any machine that "knows" it is in state B, so it is to respond to external event E with action b, but when in state A is to respond to E with action a. Perhaps it has internal logic that states: if action a or b is done twice, then change internal consciousness state to the alternative one.
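The two-state machine described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the class name and event labels are invented for this sketch): in state "A" event "E" triggers action "a", in state "B" it triggers action "b", and after the same action has fired twice the machine flips to the other internal state.

```python
class TwoStateMachine:
    """Toy sketch of the state-A/state-B machine described above."""

    def __init__(self):
        self.state = "A"
        self.count = 0  # how many times the current state's action has fired

    def on_event(self, event):
        if event != "E":
            return None  # this machine only responds to event E
        action = "a" if self.state == "A" else "b"
        self.count += 1
        if self.count == 2:  # action done twice: switch internal state
            self.state = "B" if self.state == "A" else "A"
            self.count = 0
        return action

m = TwoStateMachine()
print([m.on_event("E") for _ in range(5)])  # ['a', 'a', 'b', 'b', 'a']
```

The point of the sketch is only that a machine this simple already "knows" which internal state it is in, which is why "consciousness of one's own state" alone cannot be the whole definition.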

As Chalmers said, consciousness is "the hard problem." He was speaking of how to explain its being achieved by nerves firing, etc., but not by a set of transistors flipping states. I think just defining what it is (in the sense that humans claim, when alert, to be "conscious") is quite a hard problem. I don't know of any definition that would exclude all possible machines (if that is desired). I think any such machine-excluding definition of consciousness is hollow, as it will lean on other terms that are themselves impossible to define, stating something like: "Consciousness is a state of self-awareness with qualia production and usually some environmental awareness too."

Last edited: Apr 23, 2012
9. ### Cyperium (I'm always me, Valued Senior Member)

In the context of this thread I think that conscious awareness should be preferred.

Wikipedia's article on Awareness states:

"Awareness is the state or ability to perceive, to feel, or to be conscious of events, objects or sensory patterns. In this level of consciousness, sense data can be confirmed by an observer without necessarily implying understanding. More broadly, it is the state or quality of being aware of something. In biological psychology, awareness is defined as a human's or an animal's perception and cognitive reaction to a condition or event."

Conscious awareness is also what people in general associate with awareness (many probably don't even know that there are alternative interpretations).

Well, there are theories out there that everything is aware to a certain extent, at least anything that reacts to the environment, but it isn't the prevailing theory.

Rather, the prevailing theory suggests that awareness and consciousness require at least a minimum complexity in order to emerge (where a simple thermostat probably wouldn't fit the bill in any circumstance).

Why do you say that many machines are conscious of their internal state? No machine has been proven to be conscious of its internal state; not even humans have been proven to be conscious. As I said before, it is only through testimony that we have the idea that humans are conscious.

Again, if you are talking about simple machines that we have manufactured, then this makes no sense; machines that we have manufactured haven't been shown to have consciousness.

Consciousness itself often serves as the definition of that which can't be defined otherwise. It often answers questions like "what is it like to exist?", "what is it like to see?", "what is it like to feel joy?", and other such questions that probably don't have a good definition besides the conscious experience. So I agree that it's very hard to define consciousness; it may be impossible. Consciousness itself is a really good definition of it, though (and might be the reason it exists: to define things that couldn't be defined using logic).

10. ### Carcano (Valued Senior Member)

Gotta say...I'm very impressed with Yazata and Cyperium's posts!


12. ### Billy T (Please use Sugar Cane Alcohol Fuel, Moderator)

Certainly better than just "awareness," but it still suffers from the fact that there is no well-defined meaning for "conscious."
Perhaps, but that depends upon what you mean by the undefined "consciousness." Certainly the thermostat is aware of the temperature of the room it is in, and machines capable of taking actions don't get simpler than a thermostat.

(I will speak more about this bold text later.) Because I was illustrating the difficulty of drawing a line between human consciousness (which we cannot define) and a machine that also "knows" what internal state it is in. There is essentially a continuum of complexity of internal states, stretching from the simple thermostat (only two internal states) to humans (or far beyond human capacity in a narrow field).

For example, assume air traffic control has been fully automated (to remove human errors, save money, whatever reason). That system is a "machine" aware of more than 1000 times more detailed information (in a narrow field) about its internal state than any human could be. It "knows" the location of several hundred planes, their velocity (not just their speed), their altitude, what commands (for example: "Descend to 28,000 feet.") they are executing, their IDs and the frequencies they use for communication, and each plane's type and characteristics (stall speed, remaining fuel or flight time, etc.). All told, that machine accurately "knows" more than 10,000 dynamically changing facts at the same time. No human, unaided by machines, can keep track of 10 such facts; several studies have shown humans cannot keep more than 7 facts in their active (or "short term") memory.
Yes, I think we agree on this. I.e., while it is not possible to precisely define consciousness (in humans), you are also repeating my final sentence in post 48, but instead of my general term "qualia" you have illustrated three specific qualia.

I have read Thomas Nagel's What Is It Like to Be a Bat? (a long article). I am quite sure bats are conscious even though they have a tiny brain (a small fraction of an ounce). In fact, I had some slight professional association with a group at APL/JHU that had a large room full of the latest and greatest computers to process secret sonar records of USSR subs. (It was the US Navy's main research facility for trying to hull-ID individual subs from sonar records. Very special clearance was required to even enter that room.) One of my friends in that group was, like me, very interested in biology, especially neural processing, and knew what amazing acoustical signal processing that tiny bat brain can do IN REAL TIME. He remarked that to do the same in their facility would take more than a 24-hour day of processing.

For example, at more than a 30-foot range, a bat can identify one particular insect (the tastier type, presumably) in a swarm as its next meal, adjust its frequency-modulated chirps for optimized tracking of it, correct for the two Doppler shifts (its own speed and the velocity of the target), and change the interval between chirps to keep a steady echo stream (of delayed data) coming in, correcting for that dynamically changing delay, etc.

In this narrow field, the bat's "consciousness" or "awareness of the internal state" of the converging bat-insect dynamic system (its self-awareness of its own location, speed, chirp history, etc.), together with the corresponding delay-corrected (future-projected) estimate of the insect's location and velocity, far exceeds anything humans, even assisted by millions of dollars of computers, can do in real time.

My point is: humans are complex and flexible, but I don't see that "flexible" is necessarily part of being conscious. In limited areas other organisms are far more aware (conscious?) than humans are. The range of human qualia is quite different from, say, that of a bat, as Nagel made clear, but it seems very likely bats have their own set of qualia and certainly are aware of what their body is doing, where it is, and what it was doing and where it was earlier. (The chirp echo now being processed was emitted from a different location with a different dynamic frequency slew, and bounced off the insect when it was elsewhere, with a different velocity, etc.) Complexity alone as a requirement leaves humans way down on the list of conscious creatures, if the versatility factor is ignored.

The bats may only be sophisticated automatons, but as you suggest (in now bold text above) perhaps that is all humans are too (but very flexible ones).

The difference between humans and bats (etc.) is mainly the human's versatility, not the internal complexity of the internal states they are aware of in a limited field.

This difference is sort of like the difference between a flexibly programmed general-purpose computer and a one-purpose analogue computer. The analogue computer can "instantly" solve the one problem it was designed for, faster than the general-purpose computer can, but that is all it can do.

Last edited: Apr 24, 2012
13. ### Yazata (Valued Senior Member)

How could a robot respond to its environment without being aware of it?

What's 'subjective awareness'? That's not a rhetorical question. I think that we (meaning not only us personally, but philosophy in general) need to be clearer about what we are talking about before we can hope to address the problem of what generates it and what else might share it.

I guess that most basically, subjective awareness is something like first-person awareness. It revolves around a sense of 'me', of 'I'. It isn't just awareness of external events (heat, cold, food, predators, sex partners), it's something additional. Subjective awareness also includes an awareness that 'I'm aware of' all of those things. In other words, it seems to be kind of an internal self-referential loop, where there's not only awareness, but awareness of awareness.

We can't ever experience their awareness as our own. We can't even do that with the human beings around us. (It's philosophy's famous 'problem of other minds'.)

But we could probably get some idea of which animals are self-aware, and if so, what degree of self-awareness they might have, if we could get a better handle on their brains' data-processing powers and on what kind of information about their own internal states they are capable of processing.

We could probably tell something from observing their behavior about whether or not they have memories, about whether they are able to plan (and hence imagine) and about whether they are capable of critiquing their own behavior and learning from their mistakes.

Right. My own view is that consciousness is identical with causality down at the low end. You kick a stone and it rolls over (though not under its own power). It reacted and changed its physical state in response to what you did.

We typically think that way when we watch the behavior of protozoa, which flee if you drop a little acid onto a microscope slide. (That's a common introductory microbiology experiment.) Biologists' typical assumption is that they are little machines that causally function that way.

Then we can work our way up the animal kingdom's phylogenetic tree, observing the appearance of nerve cells and their organization into more and more complex structures. A sea urchin is still imagined as if it is kind of a machine, I guess. An earthworm might be as well. Insects are alert little devils, and it's more problematic what to think of them. Some of their behavior is highly complex, but it still seems to largely be hard-wired in.

A few of the lower phyla occasionally make good showings. Octopi are molluscs, but they are bright, the smartest invertebrates, with quite a bit of reasoning power. I wouldn't be surprised to learn that an octopus has a rudimentary non-verbal sense of its own self. By that I mean first-person, subjective awareness. There's probably some kind of little "me" inside that soft tentacled mass.

My guess is that most mammals have it to a greater or lesser degree. I'm sure that my dog does. And then there's those always-talkative human beings, who can put it all into words and think about it as an abstract problem.

But it's still all just causality, at its core. That's my view at any rate, which certainly isn't the last word on anything. This is still an area of very active philosophical and scientific speculation and debate.

14. ### Fraggle Rocker (Moderator)

We already know that. One of the bullet-points in the more-or-less universal definition of "life" is response to stimuli. Of course this is an observational definition: If something doesn't respond to stimuli we may never get around to examining it more closely to see if it satisfies the rest of the definition (growth, metabolism, etc.)

But does response to stimuli equate to awareness? If you place your potted plant on a slanted surface so that it no longer points straight up, the new growth will point in the new "up" direction and even the old growth may ever-so-slowly try to right itself as it adjusts the nutrient supply to the existing tissue. Does this mean your Ficus benjamina has awareness?

The only kind of awareness that's been mentioned in this thread that I find promising in the study of consciousness is awareness of self.
Both as a linguist and a former future scientist, I call that a really crappy definition! "Awareness is the state of being aware?" Yeah right. I've seen fortune cookies with more wisdom than that!
Since we so far have not got a very good definition of the word "aware," anything goes. The earth's ice pack reacts to its environment, by growing during periods of glaciation and shrinking during periods of warming (such as we're experiencing right now). Does that mean it's aware?
Not to mention, since we don't have a good solid definition of the word "conscious" that would satisfy a rigorous scientist, we don't even know what we're looking for!
Each of us is quite certain that he is conscious. And since we all have demonstrably the same physiology, and apparently pretty much the same basic neural wiring, we assume that everybody else is conscious too.
The same way a tree does, only quite a bit faster.
For now we'll settle for taxonomy. All warm-blooded animals (birds and mammals) dream (according to brain waves). We regard dreaming as such an incredible experience that we're willing to grant every other animal who does it membership in our exclusive Consciousness Club.

Is that ironic? We call them "conscious" because of something that happens when they are unconscious!
I'm sure a biologist from another galaxy would say that we are simply bigger machines.
And we've recently taught sign language to two other species of Great Apes. One of them saw a zebra for the first time and said "Look, a white tiger." Another one taught it to her own baby.

15. ### Cyperium (I'm always me, Valued Senior Member)

Sorry for the delay in my response. I've decided to answer to everything even though I'm going to repeat myself since all of you raise similar points.

You could say that being "aware" of something only means having knowledge about something, without any self having the knowledge. But that doesn't fit with what most people associate "awareness" with. Instead most people (at least according to my own understanding) would intimately associate awareness with a self being aware of something.

It could be the case that a simple thermostat has a self that recognises the knowledge of the temperature, which would probably mean that any knowledge about a system has a self that recognises that knowledge.

Maybe (since we are speculating anyway) "self" isn't unique to living beings, or even unique as a self, but gets to be unique simply by associating with the knowledge of a specific system (the system is unique, not the self). In this case our "self" is only a disintegrated part of a self that would probably belong to the entire existence (as an integrated system). If such a scenario is possible, then that system would probably have knowledge of all sub-systems (the disintegrated selves), but we are cut off from it. It could even be the unavoidable consequence if it is shown that consciousness and awareness don't come about because of complexity but instead rely only on simple changes of a system (like a thermostat).

Yes, but that would imply that awareness isn't something that arises from complexity, but instead applies to everything in existence (or at least everything that can change, which should cover most of the universe). It could be that increased complexity, or should I say increased integrated complexity, results in increased awareness.

Yes, not to mention the internet, where the routers keep track of several terabits of data. I think the main point about consciousness and awareness would be that the information should be integrated. We can have a thousand thermostats measuring the temperature, but if they aren't integrated with each other then they wouldn't form a system of awareness; each thermostat would have its own awareness. This is also the case with humans: we have our own unique awareness because the information isn't fully integrated with other humans. Could love and friendship be such an integration? I think they could. People in relationships often finish each other's sentences, have a good knowledge of the mood of the other person, and show other such phenomena that could be an awareness that has formed between the two (but which is shared through two selves).
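The contrast drawn here, between independent thermostats and an integrated system, can be made concrete with a toy sketch. This is purely illustrative (the function names and dynamics are invented; it is not a formal measure of integrated information): in the independent network each unit's next state depends only on itself, while in the coupled network each unit reads all the others, so a change in one unit spreads.

```python
def step_independent(state):
    # each binary unit just flips based on its own reading, ignoring the rest
    return [1 - s for s in state]

def step_coupled(state):
    # each unit switches on if ANY unit is on: information about one
    # unit's state reaches every other unit
    return [1 if sum(state) > 0 else 0 for _ in state]

def spreads(step):
    """Does perturbing unit 0 change any OTHER unit after one step?"""
    a = step([0, 0, 0])
    b = step([1, 0, 0])  # same state, but unit 0 perturbed
    return any(x != y for x, y in zip(a[1:], b[1:]))

print(spreads(step_independent))  # False: the units stay informationally isolated
print(spreads(step_coupled))      # True: unit 0's state reaches the others
```

In this informal sense, only the coupled network forms "a system of awareness" of the kind described above; the independent one remains a thousand separate thermostats.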

Yes, your term "qualia" is exactly what I was getting at too, so we're on the same page. That's one of the most difficult aspects of consciousness to explain, since we don't know how qualia are produced. If awareness and consciousness (which involve qualia) depend on information, then what is it in that information that produces qualia? Is information itself qualia?

I don't know if I should agree on that. Humans do the same type of correlation when judging the speed of incoming objects, and there are blind people who are able to navigate using the echoes of the sounds they produce (click sounds). So even if bats are better at localising sounds in this specific scenario, we could probably find a lot of structures in humans that far exceed those in a bat, including the sensitivity to the social structures of a society, which are probably far more primitive in a bat, and more primitive still in other creatures, even though they too have been shown to have social structures.

True, though we have to take into consideration that different "fields" work together and form a higher complexity. We can also localise sounds, which is then integrated with vision and balance and other structures, producing a sense of where we are in the room, what the room looks like, and such. Everything we are aware of is integrated, and thus, even though a bat can beat us in a limited field, it probably wouldn't come close when all those structures are combined.

This isn't to say that bats aren't aware, but they probably aren't aware of as much as we are - and indeed, that could be the only difference.

Yes, I would agree on that.

In short, I think that we should consider integration more when it comes to awareness. But since we have many systems in our brains, your reasoning would suggest that we have many "selves" as well? Or do they blend into one self when the systems are integrated? What do you think?

Awareness often implies a subject being aware. The problem is: is the robot a subject in its own right? Does the knowledge relate to that subject? Or is the robot simply a collection of parts where the knowledge of its surroundings remains mere knowledge, not the knowledge of a subject? In the case of the robot we come to see that information or knowledge doesn't relate to the robot as it does to a human being aware of the same knowledge. At least we could make a robot that doesn't have that relation, where the knowledge simply triggers the mechanism to move, but isn't integrated to form the knowledge of why it moved, and therefore doesn't belong to the robot but only to the mechanism that makes it move.

Yes, there is a problem of definition, and as has been addressed before, consciousness is often its own definition. We know what it is, but we can't describe it. In fact, the description would probably have to be consciousness itself. There's just no way to make the subjective objective. At least I'm finding it very hard to imagine, because the recognition of it would be consciousness itself, and how could we look at that from the outside?

Yes, that's also the problem with the robot: it would probably not have a self, at least not as a robot. If any awareness arises in the robot, it would be local to the information itself and not to the robot, since the robot wouldn't necessarily have to be constructed in such a way that the information is integrated with the robot as a whole, but only with the system that moves its legs. In the view of awareness there would be no robot there.

Somewhere we would have to make that self-referential loop that you talked about earlier and the problem would be how to construct such a loop.

Even so, supercomputers have simulations of rat brains down to single synapses. Would that mean that the simulation of the rat brain would be self-aware if the living rat was? Even though I find that very hard to believe, it wouldn't be impossible.

I agree that we should probably look at these similarities when judging whether an animal is aware or not, but there's no way to actually confirm it.

Also, there's no evidence that such a self-referential loop is even needed. No such loop has been found. I would rather think that integration of information is what enables self-awareness, and that the self is simply an unavoidable consequence of such an integration. In this case, even though we think that the self is some kind of identity, it's actually something inherent in information that we feel belongs to us simply because the information is local to us (and integrated into our local system). We could, at least in theory, extend our awareness to information outside of us if we make it a part of the integration (which could actually be what the feeling of being "one with the universe" is).

All of which requires integration of information, which, in my view, is what consciousness and awareness actually are.

Yes, I think we share that view. In my view, that change would count as information. In fact, I think that any change in a system would be counted as information about that system.

True. I think that we have two main directions we could take: one where consciousness and self-awareness depend on a structure or some self-referential loop of some kind, and one where consciousness and self-awareness are information and the self is simply the integration of that information.

Perhaps it does, in some limited sense at least.

I think that awareness requires integration, so in that case there could be some sparks of awareness in every single change; but if the changes aren't integrated then they remain a local awareness, such that each part is aware of each change but the entire system doesn't take on a self, since they aren't integrated as information (one part doesn't know what happened to the other part).

True. Since consciousness is its own definition and consciousness is only subjective (for all we know), it remains (in my view) an impossible task to define consciousness in any satisfying way. I would think that consciousness arises through the integration of information, but what does that mean? Information itself is only sensible when it comes to consciousness, so what is information? It's enough for something to change for that to be information about that change; so if information is awareness, then each change is self-aware as a change. Couldn't that be a self-referential loop? It is what it is: the change is the information of the change and hence is self-aware?

I think we can all agree on what consciousness is, simply based on the fact that we are conscious. But that doesn't mean that we can tell anyone about it, nor describe it to anyone, because it simply doesn't fit in words. Couldn't that be one of the reasons for consciousness, though? That we are able to describe things which don't have definitions, only because we know that other people have experienced the same thing?

16. ### James R (Just this guy, you know?, Administrator)

What to girl when the get drunk?