Consciousness.

I think it is against the rules to simply divert members to a different website.

Was it really too difficult for you to copy and paste your idea here? :rolleyes:
 
I don't like copy and paste either. Why not just tell us what is on your mind? (Or is the OP's mind static?)
 
More to the point: if it is too long to post, then it is too long to be an introduction here in one whopping post.
Pare it down so that it is a comprehensible intro.

Generally, we'll want to analyze it in bite-sized chunks. If paragraph two requires us to first grant paragraph one - and we don't grant paragraph one - then there's little point in posting paragraph two. See?
 
So I recently found out that one of my links from my previous post doesn't work anymore.

Anyway, what are your thoughts on how consciousness is created?

To summarize my own previous post, I think of it as something that *is* the neurons' very attempt to be as "active" as possible among each other.

It baffles me that humanity still hasn't figured this out to this day, and I guess it's one of the reasons why I'm so interested in finding out more about the subject.
 
To summarize my own previous post, I think of it as something that *is* the neurons' very attempt to be as "active" as possible among each other.

Neurons are a feature of nearly all animals, vertebrate and invertebrate. Neuronal function has been extensively researched and characterized in the various popular model organisms: rodents, zebrafish, Drosophila, C. elegans, and many more. Why do you think H. sapiens neurons “try to be more active” than, say, Drosophila fruit fly neurons? Neurons are always active. If neuronal activity is the basis for consciousness then why isn't a fruit fly conscious?
 
Neurons are a feature of nearly all animals, vertebrate and invertebrate. Neuronal function has been extensively researched and characterized in the various popular model organisms: rodents, zebrafish, Drosophila, C. elegans, and many more. Why do you think H. sapiens neurons “try to be more active” than, say, Drosophila fruit fly neurons? Neurons are always active. If neuronal activity is the basis for consciousness then why isn't a fruit fly conscious?

I honestly don't know, but if I were to go out on a limb and guess, it's not so much how active these neurons are that makes some lifeforms more conscious than others. Instead, my guess is that some lifeforms' brains provide more opportunities to take in the information of neurons being "active": more neurons take in that information, and so on, which strengthens the whole network of activeness (since the neurons can solidify their presence or existence more), and the structure of the neurons is arranged to amplify each neuron's individual effect on the structure even more (having more ways to communicate information, a.k.a. its activeness).
 
Neurons are a feature of nearly all animals, vertebrate and invertebrate. Neuronal function has been extensively researched and characterized in the various popular model organisms: rodents, zebrafish, Drosophila, C. elegans, and many more. Why do you think H. sapiens neurons “try to be more active” than, say, Drosophila fruit fly neurons? Neurons are always active. If neuronal activity is the basis for consciousness then why isn't a fruit fly conscious?
What makes you say they are not?
Consciousness is not a yes or no question. Conscious awareness emerges at evolved levels of sensory abilities and processing complexity in neural networks.

Bacteria can communicate via quorum sensing.
A single-celled Paramecium can learn to avoid obstacles.
A Venus Fly Trap can tell if the foreign body that landed on her is a lifeless object or a living morsel.
A Slime Mold can solve a maze and has memory of time.
A Mayfly can detect the pheromones of a potential mate up to 10 miles if the wind is favorable.
A Fruit Fly has a preference for ultra-violet light.
When given a choice, fruit flies will head toward ultraviolet (UV) light rather than green light.
https://sitn.hms.harvard.edu/flash/2018/flys-favorite-color/

Do Insects Have Consciousness and Ego?
The brains of insects are similar to a structure in human brains, which could show a rudimentary form of consciousness.
Most of us think of insects as little automatons, living creatures driven by instinct and outside stimulus to slurp up nectar or buzz around our ears. But in a new study, published in Proceedings of the National Academy of Sciences, researchers suggest that insects have the capacity “for the most basic aspect of consciousness: subjective experience.”
The authors of the paper, philosopher Colin Klein and cognitive scientist Andrew Barron of Australia’s Macquarie University, aren't arguing that insects have deep thoughts and desires, like “I want to be the fastest wasp in my nest” or “Yum, this pear nectar is good!” But they do suggest that invertebrates could be motivated by subjective experience, which is the very beginning of consciousness.
“When you and I are hungry, we don't just move towards food; our hunger also has a particular feeling associated with it,” Klein tells
Insects also have a rudimentary sense of ego, though very different from Narcissus or Kanye. Instead, it’s the ability to act on certain environmental cues and ignore others. “They don’t pay attention to all sensory input equally,” Barron tells Viegas. “The insect selectively pays attention to what is most relevant to it at the moment, hence (it is) egocentric.”
Recent research mapping insect brains shows that their central nervous system probably performs the same function that the midbrain does in larger animals. “That is strong reason to think that insects and other invertebrates are conscious. Their experience of the world is not as rich or as detailed as our experience—our big neocortex adds something to life,” Klein and Barron write. “But it still feels like something to be a bee.”
....more.

https://www.smithsonianmag.com/smart-news/do-insects-have-consciousness-ego-180958824/#
 
So I recently found out that one of my links from my previous post doesn't work anymore.

Anyway, what are your thoughts on how consciousness is created?

To summarize my own previous post, I think of it as something that *is* the neurons' very attempt to be as "active" as possible among each other.

It baffles me that humanity still hasn't figured this out to this day, and I guess it's one of the reasons why I'm so interested in finding out more about the subject.

Some people believe that consciousness, or self-awareness, is something that can "emerge" from an algorithm. That is to say, some algorithms are not conscious but others are, and the difference between the algorithms is the key to understanding what consciousness is.

I do not share this specious belief. An algorithm (aka a "Turing machine") is an algorithm, and all it has are states and actions; I do not see how one set of states and actions can be conscious whereas some other set of states and actions is not.
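To make the "states and actions" point concrete, here is a minimal sketch in Python of a Turing-machine-style program; the state names, tape symbols and rules are invented purely for illustration, and all the machine ever does is look up its current state and symbol and apply a rule:

# A minimal sketch of a Turing machine as nothing but states and actions.
# The state names, tape alphabet, and rules are invented for illustration;
# this particular table just flips every 1 to 0 and halts at a blank cell.

rules = {
    # (state, symbol read) -> (symbol to write, head move, next state)
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "0"): ("0", +1, "scan"),
    ("scan", " "): (" ", 0, "halt"),
}

def run(tape, state="scan", head=0):
    tape = list(tape) + [" "]          # blank-padded tape
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip()

print(run("1101"))   # -> "0000"

Nothing in that table changes in kind if you add a million more rules; it remains a lookup of states and actions.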

This is a mistaken belief based on a poor understanding of computers and AI and too much daydreaming about science fiction.

There's no basis at all for speculating that an algorithm can embody consciousness, nor is there evidence that the brain even functions as (can be treated as wholly equivalent to) a Turing machine; something else, something outside of our mechanistic reductionist philosophy, is at work.

The theoretical physicist Roger Penrose has been active in this area for years and explains this well in his book The Emperor's New Mind.

This is interesting too; it's by Penrose's collaborator Stuart Hameroff.
 
Some people believe that consciousness, or self-awareness, is something that can "emerge" from an algorithm. That is to say, some algorithms are not conscious but others are, and the difference between the algorithms is the key to understanding what consciousness is.

I do not share this specious belief. An algorithm (aka a "Turing machine") is an algorithm, and all it has are states and actions; I do not see how one set of states and actions can be conscious whereas some other set of states and actions is not.

This is a mistaken belief based on a poor understanding of computers and AI and too much daydreaming about science fiction.

There's no basis at all for speculating that an algorithm can embody consciousness, nor is there evidence that the brain even functions as (can be treated as wholly equivalent to) a Turing machine; something else, something outside of our mechanistic reductionist philosophy, is at work.

The theoretical physicist Roger Penrose has been active in this area for years and explains this well in his book The Emperor's New Mind.

This is interesting too; it's by Penrose's collaborator Stuart Hameroff.
There are 100+ pages of information on this very problem.
See: Pseudoscience: Is consciousness to be found in quantum processes in microtubules?

Feast your mind on the state of ORCH OR (Hameroff and Penrose)
 
[...] Anyway, what are your thoughts on how consciousness is created? [...] It baffles me that humanity still hasn't figured out what makes them be still to this day, and I guess it's one of the reasons why I'm so interested in finding out more about the subject.

What is it about systemic microscopic interactions producing guided macroscopic behavior for a biological body or artificial object that you find to be deficient as a broad principle for explaining successful navigation through an environment? An autonomous car can express mitigated awareness of its surroundings in terms of outward driving performance. The occasional human idiot can walk into a closed door if they are drunk or distracted, so don't fault such vehicles for not being 100% accurate at interpreting received data.
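As a crude sketch (the sensor names and thresholds below are invented for illustration), the "mitigated awareness" of such a vehicle can be nothing more than a loop of local rules whose combined effect looks like guided behavior from the outside:

# A minimal sense-decide-act step for a toy "car". Sensor fields and
# thresholds are invented for illustration only; no global awareness exists
# anywhere, just a few local rules applied to incoming readings.

def drive_step(sensors):
    if sensors["object_ahead_m"] < 5:
        return "brake"
    if sensors["lane_offset_m"] > 0.3:
        return "steer_left"
    if sensors["lane_offset_m"] < -0.3:
        return "steer_right"
    return "hold_speed"

print(drive_step({"object_ahead_m": 3.0, "lane_offset_m": 0.0}))  # -> brake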

You seem to be yet another generalizer who can't narrow "consciousness" down to anything specific that would be a puzzle in terms of falling out of mechanistic relationships alone. Or there appears to be the suggestion of a difficulty that isn't clearly stated; otherwise there's little point to this thread. (Analogy: Who really wants to discuss a rainy day being the result of rain falling from the sky?)
 
What is it about systemic microscopic interactions producing guided macroscopic behavior for a biological body or artificial object that you find to be deficient as a broad principle for explaining successful navigation through an environment? An autonomous car can express mitigated awareness of its surroundings in terms of outward driving performance. The occasional human idiot can walk into a closed door if they are drunk or distracted, so don't fault such vehicles for not being 100% accurate at interpreting received data.

You seem to be yet another generalizer who can't narrow "consciousness" down to anything specific that would be a puzzle in terms of falling out of mechanistic relationships alone. Or there appears to be the suggestion of a difficulty that isn't clearly stated; otherwise there's little point to this thread. (Analogy: Who really wants to discuss a rainy day being the result of rain falling from the sky?)

I personally believe that it's not the systemic interactions of each neuron that control the body's macroscopic behaviors, but more or less the very fact that the neurons are WILLING to interact in certain ways. That willingness travels across the network of them and solidifies the fundamentals of the existence (= purpose) of a neuron, which operates because the laws of physics make it inevitable for objects like neurons to emerge in the real world -- and that is the very signal the neurons send to each other to activate one another and solidify themselves. I'd say this is not some kind of mechanistic relationship, but something that encompasses it and creates it on a whim.

If one goes by my logic, one will see that there is a direct relation between my definition of consciousness (the neuron cells' very attempts to be active within their environment) and my definition of what makes us move and navigate (the fact that neurons are willing to interact, part of which is being willing to acknowledge their state within the network the very moment they start interacting with another neuron), because for a lifeform to move at every moment, it must first acknowledge the boundary between its own existence and any existence outside of itself.

Also, regarding your drunk and idiotic human example: this would be proof that the interactions of neurons do not allow all of them to come out on top as perfectly active neurons, and that individual neurons mostly interrupt one another in their primary function of being active, leading to disorder within human consciousness. The difference is that machines such as autonomous cars are already made to be precise in every aspect of the bigger picture, while the cells and chemicals making up the human brain are made to be themselves (as we know, they came from nothing but chemical interactions of existing objects due to the laws of physics).
 
Neurons are a feature of nearly all animals, vertebrate and invertebrate. Neuronal function has been extensively researched and characterized in the various popular model organisms: rodents, zebrafish, Drosophila, C. elegans, and many more. Why do you think H. sapiens neurons “try to be more active” than, say, Drosophila fruit fly neurons? Neurons are always active. If neuronal activity is the basis for consciousness then why isn't a fruit fly conscious?
It is.
Consciousness is nothing more than the ability of a neuronal network to differentiate its own body from its environment and act for the continued survival of its body/self.
 
Some people believe that consciousness, or self-awareness, is something that can "emerge" from an algorithm. That is to say, some algorithms are not conscious but others are, and the difference between the algorithms is the key to understanding what consciousness is.

I do not share this specious belief. An algorithm (aka a "Turing machine") is an algorithm, and all it has are states and actions; I do not see how one set of states and actions can be conscious whereas some other set of states and actions is not.

This is a mistaken belief based on a poor understanding of computers and AI and too much daydreaming about science fiction.

There's no basis at all for speculating that an algorithm can embody consciousness, nor is there evidence that the brain even functions as (can be treated as wholly equivalent to) a Turing machine; something else, something outside of our mechanistic reductionist philosophy, is at work.

The theoretical physicist Roger Penrose has been active in this area for years and explains this well in his book The Emperor's New Mind.

This is interesting too; it's by Penrose's collaborator Stuart Hameroff.

I don't believe algorithms or a set of states and actions can, by nature, create awareness either. Instead, I believe what creates awareness is the very fact that the things that create those sets of states and actions are willing to solidify themselves (their purpose, their chemical nature, etc.) to each other -- these "things" being the neuron cells within our nervous system, of course.

On a side note, a good example to counter the Turing machine argument would be the Chinese Room thought experiment.

This disproves the idea that mere algorithms can have awareness, because in this scenario an individual has to be the reason behind the correct Chinese data lists and would have to create them for valid reasons of their own. As for the information-processing agent in the Chinese Room thought experiment, it only replicates the capability of consciousness and not actual consciousness/sentience, since it does not "control" itself (= be aware of itself, since to control something is to acknowledge that something first). Indeed, it is controlled by external forces (including the reasons for giving certain answers to the questions; those would have to be controlled by the agent itself as well, since the logic behind the questions and the answers is what makes the questions and the answers what they are, or is at least a part of them).
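As a rough sketch of that point (the rule-book entries below are made up purely for illustration), the "room" can be nothing more than a lookup table: the answers can come out right even though nothing in the lookup chose the rules or is aware of what they mean.

# A minimal sketch of the Chinese Room idea: the "room" answers by pure
# symbol lookup. The phrases in the rule book are invented for illustration;
# nothing here understands Chinese, it only matches and copies symbols.

rule_book = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def room(question):
    # The operator inside the room blindly follows the rule book.
    return rule_book.get(question, "请再说一遍。")  # default: "Please say that again."

print(room("你好吗？"))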
 
P.S.

Here are pretty much all of my collective attempts to uncover consciousness. You will find that these are just an amalgamation of unorganized and random notes that I happened to write in my spare time, and because of that the wording makes some parts hard to read through. I just hope you keep that in mind, and still find some ideas worthy of acknowledgement.


https://www.youtube.com/channel/UCN7QNMnbWG5OzZEs-yE5n3A


I hear that diverting members to different websites is not recommended by the rules, but my stance is that sharing ideas about subjects like this is more important than the consequences of breaking the rule. I don't wanna make a mess by copying and pasting the entire thing (which is around 8,000 words), so I took my time to convert it into separate videos using a text-to-video converter website.

(I had to split it into five parts, because for some reason the converted videos have a limit on the amount of text they allow. Quite annoying if you ask me.)

If I get banned or punished in some other ways for this, then so be it. I just think it's worth a try, though.
 
Why is everyone asking the hard question and ignoring the hard facts? We don't even know what the question is or should be.
In order to discover the how, we first need to know what is there and where it is before we can figure out how.

This is why Tegmark proposes to follow the scientific method and examine the machine that is producing consciousness: to examine the hard facts, the hardware, exactly what it does, how brain patterns differ between species, and how those different brain patterns produce specific strengths and weaknesses in various species.

A catalogue of brains and what they can do. Perhaps then we can begin to detect subtle differences and establish "how" different brains do what they do.

I think that is a priority. Discover measurable differences and then formulate our questions.
IMO, right now we are doing this all backwards.

This website has a library of "hard facts", assembled by scientists doing the actual research in this area. 100+ pages!
 
It is.
Consciousness is nothing more than the ability of a neuronal network to differentiate its own body from its environment and act for the continued survival of its body/self.

Yes, that’s one definition of “consciousness”. Other people may not adopt that definition.

I imagine it is possible to program a robot to differentiate its own body from its environment and act for the continued survival of its body/self. That robot won’t use neuronal networks. Does a silicon and metal network that fulfils those criteria also qualify?
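For instance, a toy control loop along these lines (the sensor names and thresholds are invented for illustration) already separates internal readings from external ones and acts to keep its body going; the question is whether satisfying those criteria is enough to count as consciousness.

# A minimal sketch of a robot step that (a) separates "self" readings from
# "environment" readings and (b) acts for the continued running of its body.
# All sensor names, thresholds and actions are invented for illustration.

def step(self_state, environment):
    battery = self_state["battery"]          # belongs to the body
    obstacle = environment["obstacle_cm"]    # belongs to the world outside

    if battery < 20:
        return "return_to_charger"           # act for "survival" of the body
    if obstacle < 30:
        return "turn_away"
    return "explore"

print(step({"battery": 15}, {"obstacle_cm": 100}))  # -> return_to_charger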
 