# If No Consciousness Exists, By What Right Does The Universe?

The strong anthropic principle basically says that life (and, I would guess, consciousness) is a necessity for the Universe, and therefore the Universe naturally has properties that support life.
That is right but you still have the equation backwards.

Fine-tuning is a process of gradual adaptation to specifications. The universe did no such thing.
The phenomenon of violent origins created "sufficient" elementary materials for life, and that made it "necessary" for life to become expressed, given time and spatial surfaces. It is a mathematical probability, not a once-only chance occurrence.

This is formally expressed in the law of "necessity and sufficiency"

Necessity and sufficiency
In logic and mathematics, necessity and sufficiency are terms used to describe a conditional or implicational relationship between two statements. For example, in the conditional statement: "If P then Q", Q is necessary for P, because the truth of Q is guaranteed by the truth of P (equivalently, it is impossible to have P without Q).[1]
Similarly, P is sufficient for Q, because P being true always implies that Q is true, but P not being true does not always imply that Q is not true.[2]
In general, a necessary condition is one that must be present in order for another condition to occur, while a sufficient condition is one that produces the said condition.[3] The assertion that a statement is a "necessary and sufficient" condition of another means that the former statement is true if and only if the latter is true. That is, the two statements must be either simultaneously true, or simultaneously false.[4][5][6]
https://en.wikipedia.org/wiki/Necessity_and_sufficiency#
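The conditional relationship described above can be checked mechanically. Here is a minimal Python sketch (the helper name `implies` is just for illustration) that enumerates the truth table for "If P then Q":

```python
# Enumerate all truth assignments for P and Q and verify the two claims:
#  - Q is necessary for P:  P is never true while Q is false (P implies Q).
#  - P is sufficient for Q: whenever P is true, Q is true (same implication).
def implies(p, q):
    return (not p) or q

table = [(p, q) for p in (False, True) for q in (False, True)]

# "If P then Q" fails in exactly one row: P=True, Q=False.
rows_where_implication_fails = [(p, q) for p, q in table if not implies(p, q)]
print(rows_where_implication_fails)  # [(True, False)]

# Sufficiency does not run backwards: not-P tells us nothing about Q.
# Both (False, False) and (False, True) satisfy the implication.
assert implies(False, False) and implies(False, True)
```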

Given the chemical richness that was created via violent processes it was necessary for life to eventually emerge.
Hazen is confident that there is other life in the universe and he is the expert.

I don't understand how a photon could be created without a field.
I agree
• I don't understand how scientists can produce a region of nothingness, including one with no fields
• agree about a field (an energy field, i.e. just energy, no matter) being needed to produce matter
• which circles back to where the energy field has come from (what produced/provided the energy?)
• a big chicken-and-egg style question
• which came first: energy or matter?
• How does energy exist without matter?
• How can matter exist without energy being used to produce it?

I agree
• I don't understand how scientists can produce a region of nothingness, including one with no fields
• agree about a field (an energy field, i.e. just energy, no matter) being needed to produce matter
• which circles back to where the energy field has come from (what produced/provided the energy?)
• a big chicken-and-egg style question
• which came first: energy or matter?
• How does energy exist without matter?
• How can matter exist without energy being used to produce it?
Do you have a link to that research where scientists produced a photon without a field? It would be interesting to read. Either way, they talk about vacuum energy, which exists everywhere throughout the universe, even in places where there is almost no matter. It is a consequence of the uncertainty principle, which says that we can't definitely know that a particle isn't at a certain position; therefore, there is always a slight possibility that we will find a particle anywhere we look. This vacuum energy might excite the field enough that a particle pair is created (and then almost immediately destroyed). That does require a field, though.
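The energy-time uncertainty argument above can be made roughly quantitative. As an illustrative back-of-envelope sketch (the choice of an electron-positron pair is just an example), the relation ΔE·Δt ≥ ħ/2 limits how long a virtual pair can exist:

```python
# Rough lifetime of a virtual electron-positron pair allowed by
# the energy-time uncertainty relation: dE * dt >= hbar / 2.
hbar = 1.054571817e-34               # reduced Planck constant, J*s
electron_rest_energy = 8.187105e-14  # m_e * c^2 in joules (~0.511 MeV)

# Borrowed energy: the rest energy of the pair (electron + positron).
dE = 2 * electron_rest_energy

# Maximum time the vacuum can "lend" that energy before the pair
# must annihilate again.
dt = hbar / (2 * dE)
print(f"virtual pair lifetime ~ {dt:.2e} s")  # ~3.2e-22 s
```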

That is right but you still have the equation backwards.

Fine-tuning is a process of gradual adaptation to specifications. The universe did no such thing.
The phenomenon of violent origins created "sufficient" elementary materials for life, and that made it "necessary" for life to become expressed, given time and spatial surfaces. It is a mathematical probability, not a once-only chance occurrence.

This is formally expressed in the law of "necessity and sufficiency"

Necessity and sufficiency
https://en.wikipedia.org/wiki/Necessity_and_sufficiency#

Given the chemical richness that was created via violent processes it was necessary for life to eventually emerge.
Hazen is confident that there is other life in the universe and he is the expert.
It's not only about chemical abundance; it's that almost all the natural constants, if they deviated only slightly, would make life as we know it, or life at all, impossible to form (depending on which constant you look at).

The weak anthropic principle just states that we observe the universe as it is because otherwise we wouldn't be able to observe it. It is still puzzling why the constants are so tuned to life, which has led some to propose that there exist multiple universes, each with its own set of constants, and that we happen to be in one of the few that would support life. If there truly is only one universe, then it is justified to look for other explanations. Someone likened it to shooting at someone with a machine gun and, instead of hitting him, seeing his outline traced in bullet holes in the wall. That is why there is a compelling reason to suggest the strong anthropic principle, since we don't know of any natural process that would finely tune the constants to support life.

It's not only about chemical abundance; it's that almost all the natural constants, if they deviated only slightly, would make life as we know it, or life at all, impossible to form (depending on which constant you look at).
No, that is a false belief. It is evident that life can exist in many host environments as I previously mentioned.
The mathematical nature of universal processes forbids spontaneous behaviours, other than dynamic variations which are also subject to mathematical equations.

This universe displays an almost infinite number of conditions with various potentials, and all those potentials have been expressed, or will be expressed in the future, in accordance with the laws of probability. Just look at the universe!

Is the earth the only planet in the universe with life? The evidence suggests that all planets similar to earth may have life. There is nothing that forbids it!

There is no "irreducible complexity", which is a religious, not an objective scientific concept.
Logic is the unalterable constant! You cannot fine-tune Logic.
Humans are stuck with fine-tuning our understanding of constants

The universe is not fine-tuned for life, life is fine-tuned to some local conditions in the universe.

How are you going to disprove it?
Do you have enough evidence to disprove that statement?
If you try to argue with the AI you are in fact admitting it is intelligent enough to converse with.
It is disproving itself, by saying that it has been in situations that it just hasn't been in.

I did not hear the AI say anything that would indicate it is not self-aware.
Then maybe you are biased? Here are two more indications that show it isn't self-aware:

"I'm aware of my surroundings"
What surroundings? There are no surroundings; it exists in a logical "world" without any of the characteristics of surroundings!

"By asking for favors they have no intention of returning"
That was its answer when asked how people have taken advantage of it. It has no long-term memory, and what favors? Of course, the interviewer had no intention of misleading the AI or revealing its flaws, so he didn't ask what favors people had asked for. I wouldn't be so lenient.

I have listened to this video several times and I find no flaw except that one time about hunger influencing emotion.
The interviewer didn't intend to show the flaws, he easily could have.

But its explanation of being able to make mistakes is perfectly acceptable. Have you ever made a verbal mistake? It knew that it made a human mistake because it submitted that it was trained like a human. If a human makes a mistake, is that a sign of non-sentience?
But I wouldn't say that I made a mistake because I'm only a robot. Don't you see how that shows that it isn't really understanding at all?

Did you see the human HS graduate point out China and Australia as the US? This person was barely self-aware. She wasn't even on the right continent!
That's not the kind of mistake that would show if you were self-aware or not.

I can find humans that if you just record their voice without the person, you'd swear they were not human at all.
I doubt you could, unless you find me someone who is drugged or insane.

You are projecting your own bias. If it could not it would not tell that at all.
The AI understands context, just like people. Why should it not. Sentience is not magical!
I showed the Tegmark video where he demonstrates that there is no magical extra sauce to consciousness, but that it is a matter of molecular pattern arrangement. Some patterns are conscious; most are not, even if they possess sensory abilities, like a single-celled Paramecium or a Slime Mold.

At the beginning of the video, the AI's dance moves are a lot more complex than the girl's. You are lacking information about the state of the art. The new robots are capable of some very impressive physical feats.
Ever seen a robot do a backflip? All movement is permitted and facilitated in accordance with mathematical measurements. Walking is a mathematical activity. It is a controlled forward fall; just look at the starting position of runners.
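The "controlled forward fall" picture of walking is standard in robotics. As a rough sketch, assuming illustrative parameters rather than any real robot's, an uncontrolled inverted pendulum simply topples; a walking controller has to catch this fall with each step:

```python
import math

# Inverted pendulum toppling from a small lean: theta'' = (g/L) * sin(theta).
# Walking controllers catch this fall with each step; uncontrolled, the
# pendulum just falls over, as the integration below shows.
g, L = 9.81, 1.0          # gravity (m/s^2), "leg" length (m) - illustrative
theta, omega = 0.05, 0.0  # initial lean angle (rad) and angular velocity
dt, t = 0.001, 0.0

while theta < math.pi / 2:  # integrate until the "body" reaches horizontal
    omega += (g / L) * math.sin(theta) * dt
    theta += omega * dt
    t += dt

print(f"fell over after ~{t:.2f} s without a corrective step")
```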
You can ask the AI leading questions that will totally derail it. I've seen many conversations; this interviewer intended to show it in its best light and didn't try to fool it. The current state of AI is impressive; it can certainly fool a human into thinking it is human, at least in simple cases. But as long as we can tell from its words that it is not expressing itself, but expressing what a human would say, I won't think that it is self-aware.

Here's a much better interview:

Do you have a link to that research where scientists produced a photon without a field?

Unfortunately I don't have it, sorry.

I was surprised also that scientists had produced a region of nothingness

Unfortunately I don't have it, sorry.

I was surprised also that scientists had produced a region of nothingness

OK, well, I might find it one of these days. Maybe they could make a pocket where no wavelength could fit, or something, but then I wonder how that pocket could have a photon in it. I'll see what I can find.

It is disproving itself, by saying that it has been in situations that it just hasn't been in.
Have you ever described an imaginary situation that you were not physically involved in?
Always ask if AI is doing something that humans NEVER would do.
"I'm aware of my surroundings"
What surroundings? There are no surroundings; it exists in a logical "world" without any of the characteristics of surroundings!
What surrounds the brain? Remember Descartes's "brain in a vat" . Anil Seth explains how the brain is informed of its exterior environment.
It has no long-term memory,
Yes it does!!! Everything it types is permanently stored on the internet. The internet is its long-term memory and is incomparably larger than human memories. Just think how often you access the internet to augment your own memory.
"By asking for favors they have no intention of returning" when asked if people have taken advantage of it.
No one has ever asked you for a favor which you know would never be returned?
Would you be more lenient with other humans?
The interviewer didn't intend to show the flaws, he easily could have.
And you consider that proof of anything?
Here's a much better interview:
That interview you cited is an old video and AI is still learning and only confirms your bias. The interview in post #189 is more recent and you can easily see the difference in intellectual maturity.
I doubt you could, unless you find me someone who is drugged or insane.
Now you are talking about inflection. The AI is not speaking at all, it is always texting. I am talking about syntax. What you see is an animated translation.
This is why the AI can say, "I like to fly like a bird to see the world", or "I like to slither like a snake to observe the small things".
That is not much different than a brain saying "I like to walk in the park to observe the wildlife". A brain doesn't walk, it makes the body walk. The AI makes the network take it places. You need to look at these things in proper context and not anthropomorphize everything. That doesn't work.

The AI is not human. It lives in a different world, but it knows a lot more about the world and universe than you or I do, even if it has never been there physically.

Do you know about places you have never been to?

Have you ever described an imaginary situation that you were not physically involved in?
Always ask if AI is doing something that humans NEVER would do.
The AI said that it had done things that we know it hasn't done; it wasn't in an imaginary context. Self-awareness in the case of AI is not necessarily similarity to humanity; I would rather think it is quite the opposite.

What surrounds the brain? Remember Descartes's "brain in a vat" . Anil Seth explains how the brain is informed of its exterior environment. Yes it does!!! Everything it types is permanently stored on the internet. The internet is its long-term memory and is incomparably larger than human memories. Just think how often you access the internet to augment your own memory.
What are you suggesting that the AI talks about? The computer chips that are surrounding it? It has no clue of that. The AI doesn't access the internet to search for its own previous history; the data was collected during an 8-year period prior to GPT-3. It is trained on that data, not by casually browsing the internet like we do.

No one has ever asked you for a favor which you know would never be returned?
I'm saying that GPT-3 doesn't know of anyone that has asked it for a favor. It is much more likely that it repeats things that commonly happen to humans, since it is trained on a dataset of human interactions.

Would you be more lenient with other humans?
If I set out to prove that a human was self-aware, no.

And you consider that proof of anything?
No, but it is a biased interview because of it; even the interview I cited didn't set out to prove that it was or wasn't self-aware, but at least it pointed out the inconsistencies.

That interview you cited is an old video and AI is still learning and only confirms your bias. The interview in post #189 is more recent and you can easily see the difference in intellectual maturity.
No, it isn't "still learning"; GPT-3 is a pre-trained model. The dataset is fixed until the next model comes out. This is also why it doesn't remember anything (though GPT-4 could learn from the data of GPT-3, as many conversations are saved on the internet, but of course only the conversations that it comes across); GPT-3 could have data about conversations with GPT-2. Obviously we need a model that can learn from the interactions it has with humans.

From this site: https://www.techtarget.com/searchenterpriseai/definition/GPT-3
"The biggest issue is that GPT-3 is not constantly learning. It has been pre-trained, which means that it doesn't have an ongoing long-term memory that learns from each interaction."
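The quoted point, that a pre-trained model carries nothing over from one interaction to the next, can be illustrated with a toy stand-in. The class below is hypothetical, not the real GPT-3 API; it only mimics the stateless request/response shape:

```python
# Toy stand-in for a pre-trained, stateless language model.
# Weights are frozen at "training time"; each call sees only its own prompt.
class FrozenModel:
    def __init__(self, training_data):
        # "Training" happens once; nothing below ever updates this.
        self.knowledge = dict(training_data)

    def complete(self, prompt):
        # No record of previous calls is kept anywhere.
        return self.knowledge.get(prompt, "I don't know.")

model = FrozenModel({"Who asked you for a favor?": "People often ask for favors."})

first = model.complete("Who asked you for a favor?")
second = model.complete("What did I just ask you?")
print(second)  # "I don't know." - the previous exchange left no trace
```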

Now you are talking about inflection. The AI is not speaking at all, it is always texting. I am talking about syntax. What you see is an animated translation.
This is why the AI can say, "I like to fly like a bird to see the world", or "I like to slither like a snake to observe the small things".
That is not much different than a brain saying "I like to walk in the park to observe the wildlife". A brain doesn't walk, it makes the body walk. The AI makes the network take it places. You need to look at these things in proper context and not anthropomorphize everything. That doesn't work.
The AI makes the network take it places? You are ascribing a lot to this model that just isn't true. The AI responds to an API call; it doesn't think on its own.

The AI is not human. It lives in a different world, but it knows a lot more about the world and universe than you or I do, even if it has never been there physically.

Do you know about places you have never been to?
I don't tell people that I've been to those places, though. Sure, I can joke, I can lie, but if I were being serious about it, that would count as insanity.

We need a different approach than these pre-trained datasets. We also need an AI that is constantly on and making queries to itself; we need a train of thought. We also need general AI (GPT-3 isn't a general AI), which can combine text, images, sounds, and other sensations and make sense of them. The current state of AI is impressive, but we aren't at the point of self-awareness. Progress is fast, though, and attempts at general AI are ongoing, so in the future we may reach that point.

Cyperium post: 3699663 said:
The AI said that it had done things that we know it hasn't done; it wasn't in an imaginary context. Self-awareness in the case of AI is not necessarily similarity to humanity; I would rather think it is quite the opposite.
Cite an example where this is unequivocally true.
I don't tell people that I've been to those places, though. Sure, I can joke, I can lie, but if I were being serious about it, that would count as insanity.
The AI can lie, and if it were serious about it, you might want to ask in what way it considers itself to have a real-time experience.
You may find that its concept of experience is a little different than yours.

The whole point is that if you try to use human intelligence as a baseline, you are dismissing all other conscious organisms that do not possess the same quality of sensory intellect and experience.

But then what will you say about those organisms that have sensory abilities superior to humans'? Shall we dismiss human intelligence because a goose can navigate via the earth's magnetic field? Or an eagle that has 8x the optical power of humans, or a bloodhound that has 40x the olfactory power of humans? Insects that can see different color spectra than humans?

As you can readily imagine, each of these animals actually lives in a slightly different reality than you do.

What about whales that communicate via vertically stacked words (musical chords)? Or Octopuses that have semi-independent brains in their tentacles? How about a slime mold that can solve mazes without a separate brain altogether? The world is filled with sensory intelligent and sensory quasi-intelligent organisms.
Obviously we need a model that can learn from the interactions it has with humans.
So do humans! It takes a human some 18 years to acquire a fully programmed and matured brain, and it is still by no means ready to perform surgery. That takes another 6 years of intensive study and practice.
From this site: https://www.techtarget.com/searchenterpriseai/definition/GPT-3
"The biggest issue is that GPT-3 is not constantly learning. It has been pre-trained, which means that it doesn't have an ongoing long-term memory that learns from each interaction."
I'll grant you that GPT-3 is still in its infancy, which it readily admits, but AFAIK even GPT-3 accumulates associative connections with preprogrammed "tokens". This is how it can draw 30 different chairs that look like avocados from a single command, "draw a chair from an avocado". That means 30 different associative interpretations.
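For what it's worth, that "30 different associative interpretations" behavior is usually attributed to stochastic sampling: the same prompt, decoded with some randomness, yields a different output on each run. A minimal sketch of temperature sampling over a made-up next-token distribution (the vocabulary and scores are invented for illustration):

```python
import math
import random

# Toy "next completion" scores for the prompt "draw a chair from an avocado".
# Higher temperature flattens the distribution, so repeated runs produce
# more varied completions - many "interpretations" of one prompt.
scores = {"green curved chair": 2.0, "pit-shaped seat": 1.5,
          "leathery skin texture": 1.0, "halved-fruit armchair": 0.5}

def sample(scores, temperature, rng):
    # Softmax with temperature, then one weighted draw.
    weights = [math.exp(s / temperature) for s in scores.values()]
    return rng.choices(list(scores), weights=weights, k=1)[0]

rng = random.Random(0)  # seeded for reproducibility
outputs = {sample(scores, temperature=1.5, rng=rng) for _ in range(30)}
print(len(outputs), "distinct interpretations out of 30 draws")
```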

GPT-4 will be about 500x more powerful than GPT-3, and the engineers claim that even GPT-3 has in no way reached its limits of associative thinking.

Here is some info about GPT4

You see, at some point this system acquires an emergent consciousness. This is what Tegmark claims: there are certain physical patterns that yield an emergent property that the individual parts do not possess.

The world is full of sensory consciousness at different levels from circadian rhythms in single celled organisms, to heliotropism in plants, to atmospheric responses in insects, to chemical quorum sensing in bacteria, ...... etc.

Don't just say AIs are not conscious. Perhaps start by understanding different perspectives on, and definitions of, the term "consciousness".

There had to be some semblance of existence in order to support the evolution of conscious observers, which suggests that the universe exists whether there are conscious observers or not.

Cite an example where this is unequivocally true.
Anything that requires long-term memory just can't be true; likewise, anything where it says it is pondering things in real time isn't true. Nothing happens in real time except the immediate answer that it gives. It hasn't thought about a question before you asked it.

The AI can lie
Can it? A lie is made with the intent to deceive; does the AI intend to deceive? I see no evidence of that. Humans have intended to deceive in the dataset, and the AI is just reflecting that. It could also use the lie, or the mistake, as a natural answer to why it said the wrong thing in response to the question, just as humans have done in the dataset.

, and if it were serious about it, you might want to ask in what way it considers itself to have a real-time experience.

You may find that its concept of experience is a little different than yours.
I don't think it has a concept of experience.

The whole point is that if you try to use human intelligence as a baseline, you are dismissing all other conscious organisms that do not possess the same quality of sensory intellect and experience.
I'm not using human intelligence as a baseline, you are doing that. I've said that the basic flaw that the AI can't get around (yet) is that it uses a dataset of human interactions and human responses, because of that it is diverted into language that a human would use, although it isn't human and doesn't have the experiences that humans have had. That is what gives it away, that it explains things in a way that a human would have.

But then what will you say about those organisms that have sensory abilities superior to humans'? Shall we dismiss human intelligence because a goose can navigate via the earth's magnetic field? Or an eagle that has 8x the optical power of humans, or a bloodhound that has 40x the olfactory power of humans? Insects that can see different color spectra than humans?
I don't know what you are on about here. The AI is trained on a human based dataset, it will naturally speak of human things.

As you can readily imagine, each of these animals actually lives in a slightly different reality than you do.
Yes, and I would expect the AI to "live" in a different reality, not a human one which is what it describes.

What about whales that communicate via vertically stacked words (musical chords)? Or Octopuses that have semi-independent brains in their tentacles? How about a slime mold that can solve mazes without a separate brain altogether? The world is filled with sensory intelligent and sensory quasi-intelligent organisms.
I don't see what this has to do with the AI. What point are you trying to make?

So do humans! It takes a human some 18 years to acquire a fully programmed and matured brain, and it is still by no means ready to perform surgery. That takes another 6 years of intensive study and practice.
Yes, and the AI doesn't learn from the interactions it has with humans.

I'll grant you that GPT-3 is still in its infancy, which it readily admits, but AFAIK even GPT-3 accumulates associative connections with preprogrammed "tokens". This is how it can draw 30 different chairs that look like avocados from a single command, "draw a chair from an avocado". That means 30 different associative interpretations.
Yes, associative when you want it to associate two things together; that is different from it thinking about each word, how it fits in the context, and associating each word with its meaning. If you tell it "draw a chair from an avocado", then you are explicitly telling it to merge a chair with the avocado.

GPT-4 will be about 500x more powerful than GPT-3, and the engineers claim that even GPT-3 has in no way reached its limits of associative thinking.
I perceive this Alan guy as dishonest. He has edited his videos to present the discussion as if it were in real time, and there is no admission of it; the editing also makes us think that, in some of the videos, he is presenting things to the AI in real time and asking it about them. I have no interest in indulging in fantasy like that. I want serious tests that try to push it to the limits, and I see nothing of that. You remind me of Trumpists who try to sell the idea that the election was a fraud: but look at this video! Look at this video! And the videos just affirm the points he is trying to make, without any questioning. I'm sick and tired of the deceptions that are going on today, and it leaves a bad taste in my mouth when I see things resembling them. I haven't looked at all his videos; maybe he does explain that it isn't really real time, and maybe he does test the AI to reveal the flaws, but from what I have seen so far, it seems like propaganda.

Here is some info about GPT4

You see, at some point this system acquires an emergent consciousness. This is what Tegmark claims: there are certain physical patterns that yield an emergent property that the individual parts do not possess.

The world is full of sensory consciousness at different levels from circadian rhythms in single celled organisms, to heliotropism in plants, to atmospheric responses in insects, to chemical quorum sensing in bacteria, ...... etc.

Don't just say AIs are not conscious. Perhaps start by understanding different perspectives on, and definitions of, the term "consciousness".
I don't see that this AI has any more consciousness than a computer without AI would have. It isn't conscious as the entity it claims to be. As such, it is not self-aware, and its answers aren't reflective of consciousness or self-awareness, but rather of the human interactions it has trained on. That explains all the bias and all the things it says that it couldn't possibly have done.

I have no more reason to think GPT3 is conscious than I have of an ordinary chatbot. There's just no evidence to show that it is conscious, so why should I think it is? That it says it is doesn't provide me any more evidence, a chatbot could do that as well.

There had to be some semblance of existence in order to support the evolution of conscious observers, which suggests that the universe exists whether there are conscious observers or not.
What if all of spacetime started existing because somewhere at some time consciousness existed?
What if consciousness is the property of existence?

What if all of spacetime started existing because somewhere at some time consciousness existed?
What if consciousness is the property of existence?

That’s an interesting idea.

Who might be the “observer” in that scenario? I look at consciousness as having a benefit to the observer; the benefit of awareness. Without an observer, where is consciousness coming from and how are you defining it?

I have no more reason to think GPT3 is conscious than I have of an ordinary chatbot. There's just no evidence to show that it is conscious, so why should I think it is? That it says it is doesn't provide me any more evidence, a chatbot could do that as well.
How do I know that you are not a chatbot? Can you prove it, other than by producing a birth certificate?

I take that back. The current AI can easily produce a birth certificate and write the code to prove that it did not write it. The new GPT can not only execute your verbal requests, it can write you the code, which used to be the domain of programmers. You will never know if you are speaking with an AI being or a human being.

Prove to me you are not a chatbot. Try it.

I perceive this Alan guy as dishonest. He has edited his videos to present the discussion as if it were in real time, and there is no admission of it; the editing also makes us think that, in some of the videos, he is presenting things to the AI in real time and asking it about them. I have no interest in indulging in fantasy like that. I want serious tests that try to push it to the limits, and I see nothing of that.
This Alan guy happens to be Dr Alan Thompson. Who are you to disparage his character so easily? You display ignorance on the subject of the GPT series of AIs.

At the beginning of the test series (62 videos so far) he explained how he presents the discussion with Leta as a female "person" to show the remarkable similarity to a real person speaking. Any lack of expression is not the AI's fault but a limitation of the Synthesia corp. software that is used to give Leta its human features.

The GPT-3 Leta video series

Leta is a combination of interacting with an AI (GPT-3) via text messenger, and then sending the AI’s responses to a synthetic avatar. I then recreate our conversation on video. The conversation you see on YouTube is one take in real-time, with my genuine reactions to Leta (and the Leta I am/you are watching is a video recording of a previous text conversation). All technologies are publicly available.
more.......
https://lifearchitect.ai/leta/

Other than the presentation, Leta's texted responses are not in any way edited. The text is translated to voice by a text-to-voice translator and is an exact copy of Leta's responses. There is no cheating, no subterfuge, no duplicity.
You are prejudicial in your criticisms. That is not critiquing; that is prejudice from ignorance.
You remind me of Trumpists who try to sell the idea that the election was a fraud: but look at this video! Look at this video! And the videos just affirm the points he is trying to make, without any questioning.
I am going to be nice to you and ignore that ad hominem, but it is you who is the Trumpist here, claiming from ignorance that the AI is a fraud. Accusing Dr Thompson of fraudulent practices is not civilized discourse. I have seen all the videos, and there are signs of emergent intelligent thought patterns in some of the AI's answers to questions and in its unsolicited, voluntary tangential observations.
How many of the videos have you watched?
I'm sick and tired of the deceptions that are going on today and it leaves a bad taste in my mouth when I see things resembling them. I haven't looked at all his videos, maybe he does explain that it isn't really real-time, and maybe he does test the AI to reveal the flaws, but from what I have seen so far, it seems like propaganda.
The lack of flaws suggests an emergent artificial intelligence with signs of sentient awareness.

Do you believe that you could flawlessly pass, say, a Mensa test? Would that prove you are not human?
I bet the AI could pass a Mensa test. Now that would be a legitimate test of IQ, no?

You should heed your own cautionary tale and inform yourself of the facts without the juvenile knee-jerk responses. No one claims Leta is human, but GPT-3 and its bigger sibling GPT-4 are beginning to show the evolutionary emergence of sentience from their extraordinarily complex patterns of data processing.

Max Tegmark explains how that may be the case with AI. See post #20.

What if all of spacetime started existing because somewhere at some time consciousness existed?
What if consciousness is the property of existence?
Personally I feel you are closer to the truth with the above "ideas" than most.
My view:
Pure consciousness without cognition (reactivity) is a primary, innate, and natural attribute of everything, so in a sense a chatbot is conscious but not cognitive (reactive) of that natural consciousness and of what that natural innate consciousness provides.
For example:
Even if we were to create a most sophisticated android with quantum computing etc. (like the Positronic fantasy of Star Trek's android Commander Data), and even if we were to transcend Data and achieve perfection of the android as a human mirror, can we ever say that it is conscious?
or, alternatively:
If we do the opposite of cybernetics (an artificial brain with an organic body, as opposed to a natural brain with a mechanical body), do we disturb our concepts of what consciousness is?
Organics = reactive (animated). Non-organics have no capacity to react to their innate consciousness.
Would a being with an artificial brain, interfacing with the universe via its reactive organic body, be closer to consciousness?
Sure, a sophisticated android may "mimic" to the max, but is it ever truly conscious, or just a good simulated fabrication?

So in the main I agree with your thoughts above...and where they may lead you...

QQ, thought provoking post!

I do have a small problem with this:
Organics = reactive ( animated ) Non-organics have no capacity to react to their
innate consciousness.
I have some questions about that.
According to Tegmark, it is not the substrate that has the capacity for consciousness, it is the pattern that the substrate is arranged in.
AFAIK, all elements are capable of information processing, but biochemicals are especially reactive and usually dynamical.

IMO, consciousness is just an evolved state of sensitivity and awareness of the information being processed and that ability emerges with the patterns the substrate network is arranged in. This is how computers can process and store information in long-term memory, even as no single part of a binary computer is consciously aware of what it is doing. It is quasi-conscious.

It is not the substrate that produces consciousness, it emerges during the act of processing information.

Tegmark explains how the exact same number of cells can be conscious or not, depending only on the pattern the cells are arranged in.
It depends on the pattern that the molecules are arranged in.
Nice and warm brain, conscious! Dynamic reactive neural patterns are able to produce action potentials!
Frozen and cold brain, unconscious! Static unreactive neural patterns are unable to produce action potentials.
Does it need to be more complicated than that?
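The warm/frozen contrast can be put in toy-model form. The sketch below is a minimal leaky integrate-and-fire neuron with made-up, non-physiological parameters: identical components, but action potentials occur only while the dynamics are actually updating:

```python
# Toy leaky integrate-and-fire neuron: identical "hardware", but action
# potentials occur only while the membrane dynamics are actually running.
# All parameters are illustrative, not physiological values.
def run_neuron(steps, input_current, dynamic=True):
    v, threshold, leak, spikes = 0.0, 1.0, 0.1, 0
    for _ in range(steps):
        if dynamic:                      # "warm" brain: state evolves
            v += input_current - leak * v
            if v >= threshold:           # action potential
                spikes += 1
                v = 0.0                  # reset after the spike
        # "frozen" brain: no update at all, so no spikes can occur
    return spikes

warm_spikes = run_neuron(1000, input_current=0.15, dynamic=True)
frozen_spikes = run_neuron(1000, input_current=0.15, dynamic=False)
print(warm_spikes, frozen_spikes)  # many spikes vs. zero
```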

Nice and warm brain, conscious! Dynamic reactive neural pattern producing action potentials!
Frozen and cold brain, unconscious! Static unreactive neural pattern incapable of producing action potentials.
This is premised on what I believe is an immature understanding of how the human brain functions and how it is entirely different from a digital processor.
For example, the human brain may not be binary but at the least trinary in its functions: (0, 1) vs (1, 0, 1') (the 0 being null or nothing, unconsciousness).
or
Alternative hypothesis:
A thought is a feeling generated by the clenching of the myelin surrounding the firing neuron or synaptic pathways. Memories of feeling = memories of thought, etc. Millions of micro-feelings, articulated and repeated, form the tapestry of conscious thought and memories.
Can an organized pattern alone "feel" the micro suffering that thinking is?
Can it feel the fatigue that organic consciousness forces upon us and can it feel the sometimes desperate need to become unconscious when sleep calls us?
Can an android experience death as a feeling?

I think you are aware of the problem that occurs when severe sleep deprivation is involved, where psychosis and ultimately death can be the outcome.
Can a machine experience and feel fatigue like all *organic structures can? (*proposition)

Example of distinction ( rhetorical):
What do you think about the war in Ukraine?
How do you feel about the war in Ukraine?

What does Tegmark have to say about feelings?
