If No Consciousness Exists, By What Right Does The Universe?

Lyapunov function
In the theory of ordinary differential equations (ODEs), Lyapunov functions are scalar functions that may be used to prove the stability of an equilibrium of an ODE. Named after the Russian mathematician Aleksandr Mikhailovich Lyapunov, Lyapunov functions (also called Lyapunov's second method for stability) are important to stability theory of dynamical systems and control theory. A similar concept appears in the theory of general state space Markov chains, usually under the name Foster–Lyapunov functions.
For certain classes of ODEs, the existence of Lyapunov functions is a necessary and sufficient condition for stability. Whereas there is no general technique for constructing Lyapunov functions for ODEs, in many specific cases the construction of Lyapunov functions is known. For instance, quadratic functions suffice for systems with one state; the solution of a particular linear matrix inequality provides Lyapunov functions for linear systems; and conservation laws can often be used to construct Lyapunov functions for physical systems.
https://en.wikipedia.org/wiki/Lyapunov_function
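As a minimal sketch of the linear-systems case mentioned in the quote (my own illustration, not from the article; the matrices A and Q are arbitrary examples): solving the Lyapunov equation A^T P + P A = -Q for a stable A gives the quadratic Lyapunov function V(x) = x^T P x, and its derivative along trajectories is -x^T Q x < 0.

```python
# Minimal sketch: a quadratic Lyapunov function for a linear system dx/dt = A x.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # stable example: eigenvalues are -1 and -2
Q = np.eye(2)                  # any positive-definite Q will do

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q,
# so passing A^T and -Q yields A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

x = np.array([1.0, -0.5])            # an arbitrary state
V = x @ P @ x                        # V(x) > 0 away from the origin
V_dot = x @ (A.T @ P + P @ A) @ x    # equals -x^T Q x < 0 along trajectories
print(V, V_dot)
```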
 
What is this function based on, Write4U?
Ordinary differential equations (ODEs). Read the abstract.

IMO, these established scientific theories only confirm my proposition that spacetime has a self-ordering, quasi-intelligent (mathematical) aspect to it. Consciousness is not required when there is a reliable, non-conscious alternative function available. Occam's razor.
 
So these equations are based on what, exactly: observations, or mathematical imagination?
I have no clue.
This statement from the link caught my attention.
For certain classes of ODEs, the existence of Lyapunov functions is a necessary and sufficient condition for stability
The law of necessity and sufficiency is one of my favorite natural laws of logic.
If something is declared to meet the required properties, I tend to accept it as having been proven.
 

To your 2nd-to-last statement: what are these required properties?
 
Necessity and Sufficiency
In logic and mathematics, necessity and sufficiency are terms used to describe a conditional or implicational relationship between two statements. For example, in the conditional statement "If P then Q", Q is necessary for P, because the truth of Q is guaranteed by the truth of P (equivalently, it is impossible to have P without Q).[1][2] Similarly, P is sufficient for Q, because P being true always implies that Q is true, but P not being true does not always imply that Q is not true.[3]
In general, a necessary condition is one that must be present in order for another condition to occur, while a sufficient condition is one that produces the said condition.[4] The assertion that a statement is a "necessary and sufficient" condition of another means that the former statement is true if and only if the latter is true.[5] That is, the two statements must be either simultaneously true, or simultaneously false.[6][7][8]
Life is a demonstrated truth, therefore the necessary conditions for life had to exist prior to Abiogenesis.
In ordinary English, "necessary" and "sufficient" indicate relations between conditions or states of affairs, not statements. For example, being a male is a necessary condition for being a brother, but it is not sufficient—while being a male sibling is a necessary and sufficient condition for being a brother.
https://en.wikipedia.org/wiki/Necessity_and_sufficiency#
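As a quick illustration of the quoted definitions (my own sketch, not from the article): "P is sufficient for Q" is P → Q, "P is necessary for Q" is Q → P, and "necessary and sufficient" is the biconditional, true only when P and Q are simultaneously true or simultaneously false.

```python
# Truth-table sketch of necessity, sufficiency, and "necessary and sufficient".
def implies(p, q):
    return (not p) or q  # material implication: p -> q

print("P      Q      P->Q   Q->P   P<->Q")
for P in (False, True):
    for Q in (False, True):
        sufficient = implies(P, Q)        # P is sufficient for Q
        necessary = implies(Q, P)         # P is necessary for Q
        iff = sufficient and necessary    # both true or both false
        print(P, Q, sufficient, necessary, iff)
```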
 
This thread has the hallmarks of a kid closing his eyes, covering them with his hands, and saying "You can't see me", in reverse.

:)
 
According to Solms, all conscious activity occurs at the surface of the brain, the cerebral cortex:
https://en.wikipedia.org/wiki/Cerebral_cortex
It created the chairs as requested, which means it understood the request.
No, it doesn't mean that it understood the request. If you have an image of a chair, you can make small changes (to certain pixels or color values) that a human wouldn't even notice, but GPT3 could think it was a pigeon instead. It simply doesn't understand what a chair is.
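What this reply describes sounds like an adversarial perturbation. A hedged sketch of the standard trick (the fast gradient sign method, which applies to image classifiers in general; `loss_gradient` is an assumed input here, not a real API of any particular model):

```python
# Sketch of an adversarial perturbation: nudge every pixel a tiny amount in
# the direction that increases the classifier's loss. The change is invisible
# to a human, yet it can flip the predicted label (chair -> pigeon).
import numpy as np

def adversarial_example(image: np.ndarray, loss_gradient: np.ndarray,
                        epsilon: float = 0.01) -> np.ndarray:
    # loss_gradient: gradient of the model's loss w.r.t. the input pixels
    # (how it is obtained depends on the model; it is assumed given here)
    return np.clip(image + epsilon * np.sign(loss_gradient), 0.0, 1.0)
```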


If the request had been to design an avocado that looks like a chair, would the AI have come up with a different set of designs? Or would it understand the context in which the terms "chair" and "shape of an avocado" are used in the request?
The brain works exactly like that: when it is presented with a novel idea for which it has two separate memories, it will proceed in the exact same logical fashion as the AI. It makes a "best guess" from among several potential solutions. (Anil Seth) AFAIK, that is exactly how the text-based GPT3 functions.
GPT3 generates output one token at a time, which is this "best guess" that you are talking about. It doesn't consider complete solutions, only which token is best suited to come next. The only reason it kind of works is that the database is so very huge (as you said, basically the entire internet is used to work out which token should come next). A token may not be a single word; it might be a couple of words together, but that is nowhere near what is required for actual understanding. It is blind to what was before and only considers the weights attached to what should come next.
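A toy sketch of that token-by-token loop (my own illustration; the vocabulary and scoring function are stand-ins, not GPT3's real internals). The model only ever scores candidates for the single next token and samples one:

```python
# Toy autoregressive generation: score candidates for the next token, sample
# one, append it, repeat. No complete solution is ever planned in advance.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "chair", "avocado", "looks", "like", "a", "."]

def fake_logits(context):
    # Stand-in for the trained network: any function from context to scores.
    return rng.normal(size=len(vocab))

tokens = ["the", "avocado"]
for _ in range(5):
    scores = np.exp(fake_logits(tokens))
    probs = scores / scores.sum()               # softmax over the vocabulary
    next_id = rng.choice(len(vocab), p=probs)   # pick the single next token
    tokens.append(vocab[next_id])
print(" ".join(tokens))
```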


IMO, unconscious understanding need not be a random process. In fact, 1 + 1 = 2 is an unconscious, deterministic logical operation which does not require a brain at all. It's an abstract mathematical function, always arriving at the correct answer, unless the input is incorrect.
"Input <--> Function <--> Output"........ and........"Garbage in <--> Garbage out".

This is why binary (purely mathematical) computers are so accurate but unable to improvise, while text (dictionary definition) based computers have much greater freedom of choice and select what they believe is the correct next word in the string. That is why GPT3 can not only present a range of solutions, but also write the exact algorithm for every solution. That is pretty remarkable, IMO. Not quite human, but close to "insect"?
Maybe close to insect, but also completely different from a brain, so comparing it to a brain might be the wrong comparison to make. It's its own thing. We should compare one AI to another, not to brains.

The GPT3 developers claim that the limit of GPT3 evolution has not yet reached a ceiling, and the GPT3 neural net, consisting of 175 billion parameters, is still a baby compared to the 125 trillion neural synapses of the brain.

125 trillion synapses
https://www.sciencedaily.com/releases/2010/11/101117121803.htm#

The current limit lies in packaging trillions of artificial synapses into a small volume, which IMO is the true miracle of the brain: large-scale pattern complexity packed into small-scale pattern density.
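Back-of-the-envelope arithmetic for the comparison, using the figures quoted in this post:

```python
gpt3_parameters = 175e9    # 175 billion (the net's parameters)
cortex_synapses = 125e12   # 125 trillion, per the ScienceDaily link above
print(cortex_synapses / gpt3_parameters)   # ~714: the cortex is ~700x larger
```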


Continued....
I encourage you to look into the limits of GPT3; it is even questionable whether it can be seen as artificial general intelligence.

Is GPT-3 OverHyped?
 
Forms or configurations in general rudimentarily fill an existential slot
Maybe that is consciousness? Maybe that is what it is to exist? I'm not saying that the forms or configurations are pondering about it.

...they lack secondary levels ABOUT themselves, that reflect on themselves, that conceive themselves.
Does consciousness require that? Or does pondering about oneself require that? I'm not talking about thoughts, or even the knowledge that one exists; I'm talking only about the qualia of existence at this level. That existence itself has qualia.

Any simple and common retentions of the past located "out there" are not being used for cognition; they are not information for either psychological or artificial computational purposes. Just as 99.999...% of matter arrangements in the universe are not biological or technological, they also lack organization for systemic storing, retrieval, and creative manipulation of information.
Would they need any storage? The holographic theory of the universe states that it exists on a plane which is projected; that plane would be the storage medium. The event horizon of a black hole seems to store every event that has passed it (which is the proposed solution to the black hole information paradox). Everything in the universe is organized, either by gravity or by other forces. There is nothing that isn't organized in some way; we have very large structures between galaxies. Information is constantly being sent with every event that happens, and it is being sent through the vastness of space, though over immense stretches of time. That doesn't really matter if there is some kind of thought process going on; it would just have a different timescale than we can comprehend.

The "manifestation" or experiential characteristic of consciousness would be some ontological property of matter that science can't cope with, IF it is not amenable to technical formulation and becoming a facet of physics. Just like interaction and structure in general, it really doesn't have anything to do with consciousness until the brain or some equivalent agency recruits that capacity to build complex sensory and thought exhibitions.
It would be what it is. That's also why science just doesn't seem capable of reaching into qualia. It can describe how sound is generated in the brain (perhaps?), but it just can't describe what it is like to hear sound and how that experience arises. The qualia seem to be fundamentally different from the process that produces them.

Memory and cognitive processes are required to verify that the manifestations associated with a brain are even present, which would include manipulating them to reciprocally reference each other in a circular manner. For instance, I "know" that my vision features images as content because an audible-like narrative appears in my thoughts that either directly or indirectly states just that (for another person it might vary). But it's different modes of manifestation playing back and forth with each other as if to say "Oh, you are there" and ping back "You are, also", all manipulated by the hidden substrate of neural operations.
I agree that thinking about visual images requires a brain, as does thinking about the conscious experience. You are also not fully conscious of what is in an image until you actually identify the part that you are conscious of (you might mistake an apple for a pear until you actually identify it as an apple, and only then does it fully become an apple in your view). That is, however, on a higher level than simply being aware of existence.
 
continued....

Purkinje Cells


Transverse section of a cerebellar folium. (Purkinje cell labeled at center top.)
https://en.wikipedia.org/wiki/Purkinje_cell

Purkinje cells allow for extremely sophisticated self-referential brain processes.
As I understand it, this ability is still lacking in GPT3, but with more neurons and artificial synapses this intricate self-referential data-processing ability should improve, and GPT3 should be able to make better guesses, perhaps even becoming as efficient as an early hominid?

Neuron and synapse counts by species:

Chimpanzee: 2.8×10^10 neurons plus ? synapses
Orangutan: 3.26×10^10 neurons plus ? synapses
Gorilla: 3.34×10^10 neurons plus ? synapses
Human: 8.6×10^10 neurons plus ~1.5×10^14 synapses for an average adult
African elephant: 2.57×10^11 neurons plus ? synapses

https://en.wikipedia.org/wiki/Purkinje_cell
Maybe, but we need a different approach, perhaps working alongside GPT-3, in order to accomplish that. It isn't only about the number of neurons; there's also a lot about structure. As with the brain, there are specialized structures that deal with different things.
 
GPT3 generates output one token at a time, which is this "best guess" that you are talking about. It doesn't consider complete solutions, only which token is best suited to come next.
But it has an enormous range of best-suited tokens to choose from: one "token" from memory at a time, each producing a best guess.
And that is very much how humans do it. "Shall I do this, or this, or this, or this?" One "idea" from memory at a time, each one producing a best guess. A prediction. (Anil Seth)
The only reason it kind of works is that the database is so very huge (as you said, basically the entire internet is used to work out which token should come next). A token may not be a single word; it might be a couple of words together, but that is nowhere near what is required for actual understanding. It is blind to what was before and only considers the weights attached to what should come next.
But that is precisely how humans do it. Understanding emerges with the consideration of different perspectives (Roger Antonsen). This is what allows us to view and imagine the back of a 3D object, as compared to Plato's 2D cave figures.


https://www.thoughtco.com/the-allegory-of-the-cave-120330
Maybe, but we need a different approach, perhaps working alongside GPT-3, in order to accomplish that. It isn't only about the number of neurons; there's also a lot about structure. As with the brain, there are specialized structures that deal with different things.
Oh yesss, I agree.
Tegmark proposes that consciousness is an emergent quality of specific patterns. IOW, it is the self-referential processing pattern that acquires self-aware consciousness. Can a GPT3 achieve that? If not, why not?

The developers have indicated that they have not yet reached a ceiling (a limit) on the basic design. The problem is memory, and this is where the human brain is so extraordinary.
In particular, the cerebral cortex -- a thin layer of tissue on the brain's surface -- is a thicket of prolifically branching neurons. "In a human, there are more than 125 trillion synapses just in the cerebral cortex alone," said Smith. That's roughly equal to the number of stars in 1,500 Milky Way galaxies, he noted. Nov 17, 2010
https://www.sciencedaily.com/releases/2010/11/101117121803.htm#
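The quoted comparison checks out roughly (my assumption: ~8.3×10^10 stars per galaxy, a common Milky Way estimate; published figures vary):

```python
synapses = 1.25e14          # 125 trillion synapses in the cerebral cortex
stars_per_galaxy = 8.3e10   # assumed Milky Way star count; estimates vary
print(synapses / stars_per_galaxy)   # ~1,500 galaxies' worth of stars
```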

Can this staggeringly large number be achieved artificially, without resorting to nano-scale biology?

Working alongside a GPT3 might offer unique insight into emergent consciousness along with greater complexity of processing algorithms. It could keep a precise record of the point at which it gains "insight" in a specific context. The only question is whether we can trust the AI to tell the truth.

If you ask a GPT3 if it is conscious, it will answer affirmatively (see the video interview). Do you see the paradox this presents? What are you going to do? Make it swear an oath to tell the truth, the whole truth, and nothing but the truth, and trust that it does?
What makes you believe me if I tell you I am conscious? What is different, that you would believe me, but not a GPT3?
 