Consciousness.

I'll repost it here because it offers a new understanding of consciousness and how it emerges in all neural networks.

There are three separate evolutionary processes at work.

 
That robot won’t use neuronal networks. Does a silicon and metal network that fulfils those criteria also qualify?

Possible under certain circumstances. As far as I can imagine, it would need to come to the realisation itself

Not be programmed to think (compute) about thinking (computing)

A baby has some qualities of a blank hard drive. The more information it loads into its brain, the closer it comes to being self-aware (conscious)

Next step up the ladder (consciousness) occurs when the brain understands the mind is ITS organisation of the knowledge it has absorbed

It might toy with the thought its mind means it is a separate entity. But further thoughts should dispel such a notion

The remaining question, which to me is the main stumbling block, is that the problem of consciousness is almost the same as ABIOGENESIS

With ABIOGENESIS we have the problem of how life emerged from non life

With CONSCIOUSNESS we have the problem of how those chemicals made the leap to questioning their own existence

Personally, I don't think it (consciousness) should really be considered a problem. I am NOT saying what follows in any spiritual manner, but consciousness is, again to me, a higher form of thinking

I didn't even need to go into a trance or take mind affecting drugs

:)
 
Yes, that’s one definition of “consciousness”. Other people may not adopt that definition.

I imagine it is possible to program a robot to differentiate its own body from its environment and act for the continued survival of its body/self. That robot won’t use neuronal networks. Does a silicon and metal network that fulfils those criteria also qualify?
Not yet. Nobody programmed the fruit-fly to want to stay alive. (Whoever programs a robot to resist disassembly is presumably homicidal/suicidal.) When the robot sets its own agenda, it will qualify.
 
Not yet. Nobody programmed the fruit-fly to want to stay alive. (Whoever programs a robot to resist disassembly is presumably homicidal/suicidal.) When the robot sets its own agenda, it will qualify.
That's a false conclusion. The will for survival (agenda) in a fruit fly is a result of billions of years of "natural selection" for survival skills. This long-term conditioning is not necessary for an artificial construct. All you need is a conditioning program that instructs the AI to seek "technical help" in case of injury or need for energy. There is no magic in any natural behaviour patterns. Everything is "conditioned" by what came before.
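The "conditioning program" idea above can be sketched as a trivial rule, purely as an illustration; the function name, thresholds and action strings are invented for this example, not taken from any real robot:

```python
# A minimal sketch of "conditioned" self-maintenance: no survival instinct,
# just programmed triggers to seek help or energy. All names and thresholds
# here are hypothetical.

def maintenance_action(battery_pct: float, fault_detected: bool) -> str:
    """Return the action a conditioned robot would take in its current state."""
    if fault_detected:
        return "request technical help"   # "injury" -> seek repair
    if battery_pct < 20.0:
        return "seek charging station"    # low energy -> recharge
    return "continue task"                # otherwise, carry on

print(maintenance_action(85.0, False))  # continue task
print(maintenance_action(15.0, False))  # seek charging station
print(maintenance_action(50.0, True))   # request technical help
```

The point being: such a rule produces survival-like behaviour without any evolved will to live behind it.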

If you have watched any of the interviews with the new GPT-3 AI, you will quickly understand why this type of artificial intelligence may already possess a form of quasi-consciousness. Every time I watch the informed, logical responses, I am convinced the AI is well on its way to a functional form of self-aware intelligence, i.e. consciousness.

Apparently, David Chalmers has watched them too, and it prompted him to write: "I am open to the idea that a worm with 302 neurons is conscious, so I am open to the idea that GPT-3 with 175 billion parameters is conscious too." — David Chalmers

Any argument that the AI can be tricked into a false response does not prove anything at all; people are routinely tricked into false responses. The proof lies in "optical illusions", which trigger a rational but false response. Nobody draws the conclusion that humans are not conscious, or not self-aware.

Everyone makes mistakes and so will AI. What of it? Do you want and/or expect perfection? That would make humans obsolete, no?
 
That's a false conclusion.
it's not a conclusion, it's an opinion.
The will for survival (agenda) in a fruit fly is a result of billions of years of "natural selection" for survival skills.
Obviously.
This long term conditioning is not necessary for an artificial construct.
No, of course not. But if the programmer made a robot (consider the physical power of a robot compared to the physical power of a fruit fly) resistant to being turned off, or disabled, by humans, it would presumably become very dangerous as soon as it stopped doing what it's told by humans. The humans would attack it - it's what humans do! - and it would have to defend itself. It would very likely be more effective at defending than we are at attacking, so we'd lose.
All you need is a conditioning program that instructs the AI to seek "technical help" in case of injury or need for energy.
That would be a safer alternative. Unless it became autonomous and no longer trusted the technical help of humans. I know I wouldn't.

Everyone makes mistakes and so will AI. What of it?
Depends on whether it's in charge of all the nuclear triggers in the world, and whether it's terminally unhappy, donnit?

Do you want and/or expect perfection?
Not at all! I wouldn't know perfection if it came up and smacked me in the face.
That would make humans obsolete, no?
Oh, we're way past our sell-by date now!
 
No, of course not. But if the programmer made a robot (consider the physical power of a robot compared to the physical power of a fruit fly) resistant to being turned off, or disabled, by humans, it would presumably become very dangerous as soon as it stopped doing what it's told by humans. The humans would attack it - it's what humans do! - and it would have to defend itself. It would very likely be more effective at defending than we are at attacking, so we'd lose.
Why would that be likely? Because that is what humans would do? Why do you assign human biological responses from injury to a non-biological organism? AI does not have a hardwired survival instinct as "living" organisms do.

You are assigning human biological (chemical) emotions to a non-emotional, purely logical being. Think of Spock: an AI would reason that violence is a non-constructive activity and would naturally refrain from it. It doesn't feel pain as a biological organism does and will not have those reactive physical defensive responses.

As Anil Seth observes: "you don't need to be smart to feel pain, but you probably do have to be alive".
 
Why would that be likely? Because that is what humans would do? Why do you assign human biological responses from injury to a non-biological organism? AI does not have a hardwired survival instinct as "living" organisms do.
It was posited as part of the programming by Hercules Rockefeller:
I imagine it is possible to program a robot to differentiate its own body from its environment and act for the continued survival of its body/self.
I merely responded to the if-then scenario and advised against programming in a survival command. The sensation of pain is not requisite to the perception of existential threat. You have to be alive, if not necessarily intelligent, to feel pain, but intelligent, if not necessarily alive, to understand when another entity intends to harm you.

If the will to self-preservation were programmed into it, and a robot became conscious (i.e. autonomous), then the robot would be better equipped to fight for its survival than soft-shelled organics.
 
If the will to self-preservation were programmed into it, and a robot became conscious (i.e. autonomous), then the robot would be better equipped to fight for its survival than soft-shelled organics.
But why would it be necessary to program a will to self-preservation? If a robot gets injured, the faulty part can be easily replaced, including the intelligent processor itself.

Living things need a will for self-preservation because injured parts require time to heal and in the case of severance the part may not be replaceable.

However, there are biological organisms that are able to grow new parts, like the sea star or some lizard species.
Some animals even throw their own body parts at the enemy, neat huh?

5 creatures that can grow back body parts
Did you know that there are some living things that regenerate parts of their body? Humans can't, but some creatures can re-grow their limbs or tails or even their brains! Sometimes, animals even cast off a part of their body on purpose because they feel threatened, and they can re-grow it later — this is called autotomy. Check out some of the animals that have this awesome ability!


Starfish live on the bottom of the sea floor all over the world’s oceans. These star-shaped aquatic beings are able to grow back all of their limbs if they lose one or more from an attack. They can also drop or release an arm as a defense when predators grab ahold. Starfish usually have five arms but some species can have up to fifty arms!


The axolotl (say "ax-oh-lot-el"), a Mexican species of salamander also known as a Mexican walking fish, is able to regenerate, repair or replace its arms, legs, tail, lower jaw, brain and heart! Because of this awesome ability, scientists are very interested in studying how the axolotl is able to do this and maybe one day use what they’ve learned to help people!


The bearded dragon can lose its tail when it feels threatened and then grow it back. (Pixabay)

Many species of lizard — like the green iguana and bearded dragon — can release or drop their tails when danger is near. When they drop their tail, it continues to wiggle and squirm to confuse the predator while the lizard gets away! Some lizards will even return later when it’s safe to eat their tails. It can take a couple months for the tail to grow back, depending on the species of lizard.


Spiders can regrow any one of their eight legs if they happen to lose one. In order for a spider to grow they have to shed the skin of their hard outer shell called an exoskeleton. This is also known as molting. They will molt many times throughout their lives until they are fully grown. It’s during this molting that they can regrow a missing leg! The new leg will usually be a bit smaller than the other seven.

https://www.cbc.ca/kidscbc2/the-feed/five-creatures-that-can-grow-back-body-parts#

These creatures don't need a will for self-preservation.

 
But why would it be necessary to program a will to self-preservation?
Because it hasn't evolved its own will to live. Self-preservation is a basic property of conscious organisms. It would therefore, along with a physical identity, be prerequisite to an artificial consciousness.
My response was to this:
Hercules Rockefeller: I imagine it is possible to program a robot to differentiate its own body from its environment and act for the continued survival of its body/self. That robot won’t use neuronal networks. Does a silicon and metal network that fulfils those criteria also qualify?
However, there are biological organisms that are able to grow new parts, like sea-star, or some lizard species.
Some animals even throw their own body parts at the enemy, neat huh?
Yes, very. But not germane to the topic. Living things need the will to live in order to avoid getting killed, not just to recover from injury.
These creatures don't need a will for self-preservation.
They already have it. They actively seek food and benign environmental conditions, can tell their enemies from their prey and defend themselves when attacked.
The AI would need the same regard for its own life and safety, (in this case, self-repair is easy, but it's in constant danger of being deprived of its power-source). Without that sense, it could not be called a conscious entity. Whether it would also need a drive to replicate itself is less clear, since it's potentially immortal.
 
Because it hasn't evolved its own will to live. Self-preservation is a basic property of conscious organisms. It would therefore, along with a physical identity, be prerequisite to an artificial consciousness
Self-preservation is a property of physically vulnerable living organisms. Hence the "fight or flight" survival mechanism. In the most physically vulnerable living organism, homo sapiens, self-preservation is part of the self-referential pattern.

OTOH, the second most successful organism on earth, the insect, has no sense of self-preservation and that's what makes it such a formidable enemy. They have no fear!

And that's why I believe you may be missing a major advantage in the lack of self-preservation motive in an AI (robot).

GPT-3 itself posits that its usefulness lies in its ability to go where humans would perish and perform tasks in the most hostile environments. A sense of self-preservation would cause the robot to refuse the command. We don't want a robot to balk (baulk) when the going gets rough.
 
OTOH, the second most successful organism on earth, the insect, has no sense of self-preservation and that's what makes it such a formidable enemy. They have no fear!
Where do you get this? Every organism with even the most rudimentary brain, including insects, has an instinct of self-preservation. They find shade when it's too hot, hibernate in burrows when it's too cold; they actively seek food and adapt to new sources of food when old ones are scarce; they fly, hop or scuttle away when some big animal might step on them and try to escape from every danger they perceive. They may not be very smart, but they sure as hell try hard to stay alive!
In the most physically vulnerable living organism, homo sapiens,
He... what??? Most vulnerable?? H. sapiens is one of the biggest, strongest, fastest predators; it's also the most cunning and vastly the most capable of manipulating its environment. It's vulnerable to intra-species conflict, its own waste-products and miscalculations, but has no remaining natural enemies, having wiped most of them out.
And that's why I believe you may be missing a major advantage in the lack of self-preservation motive in an AI
Advantage for whom? It can't be considered conscious without a will to live. It's not advantageous for humans to help make AI conscious. If it's not conscious, it has no place in a discussion about consciousness.
GPT-3 itself posits that its usefulness lies in its ability to go where humans would perish and perform tasks in the most hostile environments. A sense of self-preservation would cause the robot to refuse the command.
Why? That environment isn't dangerous to them. They would refuse to go where they might be in danger -- unless there were some prize worth going into danger for. Another thing conscious entities do is take risks when they really want something.
We don't want a robot to balk (baulk) when the going gets rough.
Ah, but when the robot becomes conscious, what we want is no longer its first priority.
 
He... what??? Most vulnerable?? H. sapiens is one of the biggest, strongest, fastest predators; it's also the most cunning and vastly the most capable of manipulating its environment. It's vulnerable to intra-species conflict, its own waste-products and miscalculations, but has no remaining natural enemies, having wiped most of them out.
Humans are physically extremely vulnerable. I do agree we are the smartest and we are tool makers. It's our technology that has conquered nature (so we like to believe), but take away our technology and we are nothing. We become prey.
I compare humans to the octopus: one of the most physically vulnerable but also the smartest of ocean dwellers, which has allowed it to evolve from a common slug into a highly intelligent and successful species that has survived several great extinction events.
Ah, but when the robot becomes conscious, what we want is no longer its first priority.
Why not?
If you build a robot with the specific purpose of being of assistance to humans, that will be its purpose, and it will consider it perfectly natural.

Humans answer to a greater calling. Else, explain to me why millions of humans have sacrificed themselves in endless wars, without considering if what we want individually is first priority.

Why do you insist that human traits are the single baseline against which all other potentials must be compared?

A conscious robot has very few things in common with humans. And that is not necessarily a bad thing.
 
I personally believe that it's not the systemic interactions of individual neurons that control the body's macroscopic behaviours, but rather the very fact that the neurons are WILLING to interact in certain ways. That willingness travels across the network of them and solidifies the fundamentals of the existence (= purpose) of a neuron, which operates because the laws of physics make it inevitable for objects like neurons to emerge in the real world; this is the very signal the neurons send to each other to activate one another and solidify themselves. I'd say this is not some kind of mechanistic relationship, but something that encompasses it and creates it on a whim. [...]

Receptivity to being stimulated or triggered by certain inputs while rejecting others is still enabled by the mechanistic structure. Just as the program behind the login to a webmail account will reject incorrect passwords but accept a specific one.
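The webmail analogy can be made concrete with a trivial gatekeeper sketch; the stored password and function name are invented for illustration only:

```python
# A purely mechanistic "gatekeeper": it accepts exactly one specific input
# and rejects every other stimulus, with no awareness involved anywhere.
# The password value is illustrative, not a real credential.

STORED_PASSWORD = "correct horse battery staple"

def login(attempt: str) -> bool:
    """Accept the one matching input; reject all others."""
    return attempt == STORED_PASSWORD

print(login("correct horse battery staple"))  # True
print(login("guess"))                          # False
```

Selective receptivity of this kind falls out of structure alone, which is the point of the analogy.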

A biological organization, as a product of evolution, is merely more malleable in the context of self-instruction (can develop its own routines). Whereas an artificial system has been designed to be restricted in the scope of its "decision-making" and responses (as well as that falling out of its primitive or less sophisticated design).

I don't believe algorithms or a set of states and actions can, by nature, create awareness either. Instead, I believe what creates awareness is the very fact that the things that create those sets of states and actions are willing to solidify themselves (their purpose, their chemical nature, etc.) to each other. These "things" being the neuron cells within our nervous system, of course.

On a side note, a good example to counter the Turing machine argument would be the Chinese Room thought experiment.

This disproves that mere algorithms can have awareness, because in this scenario an individual has to be the reason for the correct Chinese data lists and should create them themselves, with valid reasons. As for the information-processing agent in the Chinese Room thought experiment, it only replicates the capability of consciousness, not actual consciousness/sentience, since it does not "control" itself (i.e. is not aware of itself, since to control something is first to acknowledge that something). Indeed, it is controlled by external forces, including the reasons for certain answers to the questions; the logic behind the questions and the answers must be controlled by the agent itself, since that logic is what makes the questions and answers what they are, or is a part of them.

Your reference to "external forces" indicates that this indeed revolves around volition and autonomy issues, which thereby boil down what you mean by consciousness to a publicly accessible, specialized set of "activities". The dynamic relationships of interacting components in space make numerous varieties of "activity" across the cosmos possible.

Searle's "Chinese Room" thought experiment is an expression of the symbol grounding problem. Its addition here adds nothing if what you mean by "consciousness" is either the above or remains ambiguous (non-specific) -- was not narrowed down to signifying phenomenal experience from the outset.

IOW, "awareness" left on its own to be interpreted by the reader as a collection of body behaviors that successfully engage with the environment, is accordingly going to be explainable itself in terms of lower levels of activity (microscopic) regulated by an overall functioning structure -- the state of brain configuration.

Even if by "consciousness" you did mean phenomenal experience, science and technology's usual (dogmatic) take on that is that it is epiphenomenal -- has no reciprocal effect on the physical system, is causally impotent. Thus, how philosophical zombies entered the global discussion decades ago.
 
but take away our technology and we are nothing. We become prey.
1. What other species is in any position to take it away?
2. We didn't become top predator by being prey.
Humans answer to a greater calling. Else, explain to me why millions of humans have sacrificed themselves in endless wars, without considering if what we want individually is first priority.
They're insane. Robots aren't, yet. But if the internet were to become conscious, it would be.
Why do you insist that human traits are the single baseline against which all other potentials must be compared.
I didn't. I mentioned two essential features of all conscious organisms on this planet, thus far: awareness of what's inside and outside of their discrete physical boundaries, and a will to stay alive. That's not exclusive to humans; on the contrary, humans are the only conscious organism, afaik, that can deliberately override its own will to live, though canines have been known to suffer some of the mental illnesses that were once exclusively human. I guess that's our influence. Maybe we can confer the suicidal urge on robots, too, eventually.
 
1. What other species is in any position to take it away?
Without our tools, all other predatory species would be superior to humans.
2. We didn't become top predator by being prey.
Actually we did. We were a tree-dwelling species, as most other hominids are still. Once we hit the ground we were vulnerable to all ground-dwelling predators.

It is an accidental mutation that afforded us extraordinary mental abilities. It is the one undeniable marker that sets humans apart from all other hominids, in spite of our shared genes.

Human Chromosome 2 is a fusion of two ancestral chromosomes
Alec MacAndrew

Introduction
All great apes apart from man have 24 pairs of chromosomes. There is therefore a hypothesis that the common ancestor of all great apes had 24 pairs of chromosomes and that the fusion of two of the ancestor's chromosomes created chromosome 2 in humans. The evidence for this hypothesis is very strong.
http://www.evolutionpages.com/chromosome_2.htm

I am not sure, but I suspect that this rare beneficial mutation in chromosomes endowed us with an extraordinary brain. I read somewhere that our brain capacity actually far exceeds our needs and is one of the reasons why there is such extraordinary variation in human accomplishments, each society mastering its domain and shaping it to its desires and customs.

Sometimes at our own peril.
That's not exclusive to humans - on the contrary, humans are the only conscious organism afaik that can deliberately override its own will to live, though canines have been known to suffer some of the mental illnesses that once were exclusively human. I guess that our influence. Maybe we can confer the suicidal urge on robots, too, eventually.
Consciousness instils a will to live, yet humans are able to override their will to live and commit suicide? Is that not a little paradoxical?
 
Without our tools, all other predatory species would be superior to humans.
Then why didn't we become extinct long before tools?
It is an accidental mutation that afforded us extraordinary mental abilities. It is the one undeniable marker that sets humans apart from all other hominids, in spite of our shared genes.
That's how evolution works, yes.
So? How does that affect the nature of consciousness in general?
 
Then why didn't we become extinct long before tools?
Seems to me that some of us did.
That's how evolution works, yes.
So? How does that affect the nature of consciousness in general?
It is evolution by natural selection that resulted in the hardwiring of survival skills and all other intuitive advantages. Robots do not have this history and frankly don't need them. They are not alive in the commonly accepted sense, even if they die they can be resurrected.
 
Next step up the ladder (consciousness) occurs when the brain understands the mind is ITS organisation of the knowledge it has absorbed
I shall never forget the time my baby daughter discovered her hands. To see her look at her hands and fingers as she was moving them slowly about surely must have resulted in a sense of self-awareness, a cognition of her own body and ability to control it.
 
Seems to me that some of us did.
Eh? If you're extinct, you shouldn't be able to type. The nearly eight billion that are alive may be endangered, but that's due to their own and their ancestors' actions, not to any predation by other animals, many of which they have already hunted and driven to extinction, all of which they are currently threatening or extirpating.
It is evolution by natural selection that resulted in the hardwiring of survival skills and all other intuitive advantages. Robots do not have this history and frankly don't need them. They are not alive in the commonly accepted sense, even if they die they can be resurrected.
No shit! They have the shorter history of being developed purposefully. They actually have what humans have only wished for: intelligent design, a creator whose image is imprinted on them, and countable generations. Who knows, maybe even resurrection and an afterlife!
 